A discrete weighted Markov--Bernstein inequality for polynomials and sequences
DIMITAR K. DIMITROV AND GENO P. NIKOLOV
Abstract.
For parameters $c\in(0,1)$ and $\beta>0$, let $\ell^2(c,\beta)$ be the Hilbert space of real functions defined on $\mathbb N$ (i.e., real sequences), for which
$$\|f\|_{c,\beta}^2 := \sum_{k=0}^{\infty}\frac{(\beta)_k}{k!}\,c^k\,[f(k)]^2 < \infty.$$
We study the best (i.e., the smallest possible) constant $\gamma_n(c,\beta)$ in the discrete Markov-Bernstein inequality
$$\|\Delta P\|_{c,\beta} \le \gamma_n(c,\beta)\,\|P\|_{c,\beta}, \qquad P\in\mathcal P_n,$$
where $\mathcal P_n$ is the set of real algebraic polynomials of degree at most $n$ and $\Delta f(x):=f(x+1)-f(x)$. We prove that:
(i) $\gamma_n(c,1) \le 1+\frac{1}{\sqrt c}$ for every $n\in\mathbb N$ and $\lim_{n\to\infty}\gamma_n(c,1) = 1+\frac{1}{\sqrt c}$;
(ii) for every fixed $c\in(0,1)$, $\gamma_n(c,\beta)$ is a monotonically decreasing function of $\beta$ in $(0,\infty)$;
(iii) for every fixed $c\in(0,1)$ and $\beta>0$, the constants $\gamma_n(c,\beta)$ are bounded uniformly with respect to $n$.
A similar Markov-Bernstein inequality is proved for sequences in $\ell^2(c,\beta)$. We also establish a relation between the best Markov-Bernstein constants $\gamma_n(c,\beta)$ and the smallest eigenvalues of certain explicitly given Jacobi matrices.

1. Introduction and statement of the results
Throughout this paper $\mathcal P_n$ and $\mathcal P_n^{\mathbb C}$ stand for the sets of real and complex algebraic polynomials of degree not exceeding $n$, and $\mathcal P$ for the set of all real polynomials. Inequalities of the form
(1.1) $\|p'\| \le c_n\,\|p\|$, $p\in\mathcal P_n$ or $p\in\mathcal P_n^{\mathbb C}$,
which hold for various norms, are called Markov-Bernstein-type inequalities. Andrey Markov [26] settled the classical case of real polynomials and the uniform norm in $[-1,1]$: the extremal polynomial is the Chebyshev polynomial $T_n(x)=\cos n\arccos x$, $x\in[-1,1]$, and $c_n$ is equal to $T_n'(1)=n^2$. Later E. Hille, G. Szegő and J. D. Tamarkin [16] proved inequality (1.1) for the norm in $L^p[-1,1]$, $1\le p<\infty$. The inequality
(1.2) $\|p'\|_{L^p(\partial D)} \le n\,\|p\|_{L^p(\partial D)}$, $p\in\mathcal P_n^{\mathbb C}$,
where, for $0<p<\infty$,
$$\|f\|_{L^p(\partial D)} = \Bigl(\int_{-\pi}^{\pi}|f(e^{i\theta})|^p\,d\theta\Bigr)^{1/p},$$
holds for every $p>0$.

[The first author is supported by the Brazilian foundations CNPq under Grant 306136/2017-1 and FAPESP under Grants 2016/09906-0 and 2016/10357-1. The second author is supported in part by the Bulgarian National Research Fund through Contract DN 02/14 and by Sofia University Research Fund through Contract 3067/2020.]
It is usually called the Bernstein inequality, though the first proof for the case $p=\infty$ is due to M. Riesz [36] (see [29]). It was then established for $1\le p<\infty$, and V. Arestov [3] settled the case $p\in(0,1)$. If $f$ is an entire function of exponential type $\sigma$ such that $f\in L^p(\mathbb R)$, then
(1.3) $\|f'\|_{L^p(\mathbb R)} \le \sigma\,\|f\|_{L^p(\mathbb R)}$.
We refer to [4, Theorem 11.3.3] and [34] for the cases $1\le p\le\infty$ and $p\in(0,1)$, respectively. In general,
$$c_n = \sup\{\|p'\|/\|p\| : p\in\mathcal P_n,\ p\ne 0\},$$
and in some cases it has been determined explicitly. When one considers the norm in a Hilbert space, the square of the sharp constant $c_n$ in Markov's inequality for polynomials is the largest eigenvalue of a certain matrix. Despite this fact, even in the $L^2$ spaces induced by the classical weight functions of Jacobi ($w_{\alpha,\beta}(x)=(1-x)^\alpha(1+x)^\beta$, $x\in[-1,1]$, $\alpha,\beta>-1$), Laguerre ($w_\alpha(x)=x^\alpha e^{-x}$, $x\in(0,\infty)$) and Hermite ($w_H(x)=e^{-x^2}$, $x\in(-\infty,\infty)$), the sharp Markov constants are known only in a few cases (here we do not discuss inequalities relating different norms of a polynomial and its derivative). For the Laguerre case $\alpha=0$, P. Turán [41] proved that
$$c_n = \Bigl(2\sin\frac{\pi}{4n+2}\Bigr)^{-1},$$
while in the Hermite case, which is a straightforward one, $c_n=\sqrt{2n}$, and the Hermite polynomial $H_n(x)$ is the unique (up to a constant factor) extremal polynomial. In the case of a constant weight function $w(x)\equiv 1$, $x\in[-1,1]$
(the Legendre case), E. Schmidt [38] proved that, with some $R\in(-1,1)$,
$$c_n = \frac{(2n+3)^2}{\pi^2}\Bigl(1-\frac{\pi^2-8}{(2n+3)^2}+\frac{16R}{(2n+3)^4}\Bigr).$$
Without any claim for completeness, we mention that bounds for the best constants in the $L^2$ norms induced by the Laguerre or the Gegenbauer weight functions are obtained in [1, 2, 7, 8, 9, 30, 31, 32]. Regarding the asymptotic behaviour of the best Markov constant $c_n$, we point out that $c_n$ is $O(n^{1/2})$, $O(n)$ and $O(n^2)$ as $n\to\infty$ in the cases of the $L^2$-norms induced by the Hermite, Laguerre and Gegenbauer weight functions, respectively. Weighted versions of (1.2) for the so-called weights with doubling properties were established by G. Mastroianni and V. Totik [27] when $1\le p\le\infty$, and by T. Erdélyi [11] when $0<p<1$.
Recently D. Lubinsky [25] proved the weighted analog of (1.3) for entire functions of exponential type, for all $p>0$, with weights of the form $|x|^\alpha$, $\alpha\in\mathbb R$.
In this paper we study a discrete weighted Markov-Bernstein inequality for real algebraic polynomials. For any pair of parameters $(c,\beta)$ such that $c\in(0,1)$ and $\beta>0$, the Meixner inner product and Meixner norm in $\mathcal P$ are defined by
(1.4) $\langle f,g\rangle = \langle f,g\rangle_{c,\beta} := \sum_{k=0}^{\infty}\dfrac{(\beta)_k}{k!}\,c^k f(k)g(k)$,
where $(\beta)_k$ is the Pochhammer symbol, $(\beta)_k=\beta(\beta+1)\cdots(\beta+k-1)$, $k\in\mathbb N$, with $(\beta)_0=1$, and
(1.5) $\|f\|_{c,\beta} = \langle f,f\rangle_{c,\beta}^{1/2}$.
The forward difference (shift) operator $\Delta$ is defined by
$$\Delta f(x) := f(x+1)-f(x), \qquad x\in\mathbb N.$$
The Markov-Bernstein inequality for $\mathcal P_n$ associated with this norm is
(1.6) $\|\Delta p\|_{c,\beta} \le \gamma_n\,\|p\|_{c,\beta}$, $p\in\mathcal P_n$.
We are interested in the best (the smallest possible) constant in (1.6),
(1.7) $\gamma_n = \gamma_n(c,\beta) := \sup\{\|\Delta f\|_{c,\beta} : f\in\mathcal P_n,\ \|f\|_{c,\beta}=1\}$.
Our main result is the following theorem:
Theorem 1.1.
Let $\gamma_n(c,\beta)$ be the best constant in the Markov-Bernstein inequality (1.6). Then:
(i) For every $n\in\mathbb N$, $\gamma_n(c,1)$ satisfies the inequality
(1.8) $\gamma_n(c,1) < 1+\dfrac{1}{\sqrt c}$.
Moreover,
(1.9) $\lim_{n\to\infty}\gamma_n(c,1) = 1+\dfrac{1}{\sqrt c}$.
(ii) For every fixed $n\in\mathbb N$ and $c\in(0,1)$, $\gamma_n(c,\beta)$ is a decreasing function of $\beta\in(0,\infty)$.
(iii) For every fixed $c\in(0,1)$ and $\beta>0$ there exists a constant $C(c,\beta)>0$ such that $\gamma_n(c,\beta)\le C(c,\beta)$ for every $n\in\mathbb N$.

Remark 1.2.
Inequality (1.6) is discrete for two reasons: the Meixner norm $\|\cdot\|_{c,\beta}$ is a "discrete" one, and the derivative is replaced by the forward difference operator. Theorem 1.1(iii) reveals a somewhat unusual phenomenon: while typically the sharp constants in Markov-Bernstein inequalities tend to infinity as $n$ grows, here the sequence $\{\gamma_n(c,\beta)\}_{n\in\mathbb N}$ is bounded.

Theorem 1.1(iii) follows from a Markov-Bernstein inequality for a wider set of functions. Let $\ell^2(c,\beta)$ be the Hilbert space of real-valued functions $f$ defined on $\mathbb N$ (i.e., sequences $f=(f(0),f(1),\ldots)$) for which $\|f\|_{c,\beta}<\infty$. We prove the following Markov-Bernstein inequality for sequences in $\ell^2(c,\beta)$:

Theorem 1.3.
Let $c\in(0,1)$.
(i) If $\beta\ge 1$, then
(1.10) $\|\Delta f\|_{c,\beta} \le \Bigl(1+\dfrac{1}{\sqrt c}\Bigr)\|f\|_{c,\beta}$, $f\in\ell^2(c,\beta)$,
and for $\beta=1$ the constant $1+\frac{1}{\sqrt c}$ cannot be replaced by a smaller one.
(ii) If $0<\beta\le 1$, then
(1.11) $\|\Delta f\|_{c,\beta} \le \Bigl(1+\dfrac{1}{\sqrt{\beta c}}\Bigr)\|f\|_{c,\beta}$, $f\in\ell^2(c,\beta)$.

Set $\widetilde\gamma(c,\beta) := \sup_{f\in\ell^2(c,\beta),\,f\ne 0}\|\Delta f\|_{c,\beta}/\|f\|_{c,\beta}$; then Theorem 1.3 implies
(1.12) $\widetilde\gamma(c,\beta) \le 1+\dfrac{1}{\sqrt c}$ for $\beta\ge 1$, and $\widetilde\gamma(c,\beta) \le 1+\dfrac{1}{\sqrt{\beta c}}$ for $0<\beta<1$.
Since $\mathcal P_n\subset\ell^2(c,\beta)$, we have $\gamma_n(c,\beta)\le\widetilde\gamma(c,\beta)$, $n\in\mathbb N$; hence inequality (1.8) in Theorem 1.1(i) and Theorem 1.1(iii) are consequences of (1.12).
The rest of the paper is structured as follows. Theorem 1.3 is proven in Section 2. In Section 3 we give some properties of the Meixner polynomials, the orthogonal polynomials with respect to the Meixner inner product. A relation between the best Markov constants $\gamma_n$, $n\in\mathbb N$, and the smallest eigenvalues of some Jacobi matrices is established in Section 4. It is worth noticing that this result applies to more general situations, concerning the sharp constants in a wide class of polynomial inequalities in $L^2$-norms (see [1, 33]). In Section 5 we obtain two-sided estimates for $\gamma_n(c,1)$ which complete the proof of Theorem 1.1(i). In Section 6 we apply the Hellmann-Feynman theorem to prove Theorem 1.1(ii). Section 7 contains some comments.

2. Proof of Theorem 1.3
Set $\widetilde f(\cdot) := f(\cdot+1)$; then by the triangle inequality
(2.1) $\|\Delta f\|_{c,\beta} = \|\widetilde f - f\|_{c,\beta} \le \|\widetilde f\|_{c,\beta} + \|f\|_{c,\beta}$.
We have
$$\|\widetilde f\|_{c,\beta}^2 = \sum_{k=0}^{\infty}\frac{(\beta)_k}{k!}\,c^k\,[f(k+1)]^2 = \frac{1}{c}\sum_{k=1}^{\infty}\frac{(\beta)_{k-1}}{(k-1)!}\,c^k\,[f(k)]^2 = \frac{1}{c}\sum_{k=1}^{\infty}\frac{(\beta)_k}{k!}\,c^k\,[f(k)]^2\,\frac{k}{k-1+\beta}.$$
Since
$$\frac{k}{k-1+\beta}\le 1 \ \ \text{for}\ \beta\ge 1, \qquad \frac{k}{k-1+\beta}\le\frac{1}{\beta} \ \ \text{for}\ 0<\beta\le 1, \qquad k\in\mathbb N,$$
we conclude that
$$\|\widetilde f\|_{c,\beta} \le \frac{\|f\|_{c,\beta}}{\sqrt c} \ \ \text{for}\ \beta\ge 1, \qquad \|\widetilde f\|_{c,\beta} \le \frac{\|f\|_{c,\beta}}{\sqrt{\beta c}} \ \ \text{for}\ 0<\beta\le 1.$$
By substituting these upper bounds for $\|\widetilde f\|_{c,\beta}$ in the right-hand side of (2.1), we obtain inequalities (1.10) and (1.11).
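The two bounds just obtained are easy to test numerically. The following sketch (the parameter values and helper names are our own choices, not the paper's) evaluates the Meixner norm of finitely supported sequences and checks (1.10) and (1.11), as well as the behaviour of the truncated alternating sequence used below for $\beta=1$.

```python
# Numerical sanity check of Theorem 1.3 -- an illustrative sketch; the
# parameter values and helper names here are our own choices.
import math
import random

def weight(k, beta, c):
    # Meixner weight (beta)_k / k! * c^k, computed iteratively
    w = 1.0
    for j in range(k):
        w *= (beta + j) / (j + 1) * c
    return w

def norm(f, beta, c, kmax):
    # Meixner norm of a sequence vanishing for k >= kmax
    return math.sqrt(sum(weight(k, beta, c) * f(k) ** 2 for k in range(kmax)))

def ratio(f, beta, c, kmax):
    # ||Delta f|| / ||f||
    return norm(lambda k: f(k + 1) - f(k), beta, c, kmax) / norm(f, beta, c, kmax)

random.seed(1)
c = 0.25
for beta in (0.5, 1.0, 3.0):
    bound = 1 + 1 / math.sqrt(c if beta >= 1 else beta * c)
    for _ in range(50):
        vals = [random.uniform(-1, 1) for _ in range(30)]
        f = lambda k: vals[k] if k < 30 else 0.0
        assert ratio(f, beta, c, 32) <= bound + 1e-9

# the sequence f(k) = (-1)^k c^{-k/2}, k <= n, almost attains the bound at beta = 1
n = 400
g = lambda k: (-1) ** k * c ** (-k / 2) if k <= n else 0.0
assert abs(ratio(g, 1.0, c, n + 2) - (1 + 1 / math.sqrt(c))) < 0.01
```

With $c=0.25$ the ratio for the truncated alternating sequence is about $2.9967$, already close to the sharp value $1+1/\sqrt c = 3$.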
It remains to prove the sharpness of the constant $1+1/\sqrt c$ in the case $\beta=1$. For an arbitrary fixed $n\in\mathbb N$ we consider the sequence
$$f(k) = \begin{cases} (-1)^k c^{-k/2}, & 0\le k\le n,\\ 0, & k>n. \end{cases}$$
We have $\|f\|_{c,1}^2 = n+1$ and
$$\Delta f(k) = \begin{cases} (-1)^{k+1} c^{-k/2}\bigl(1+\frac{1}{\sqrt c}\bigr), & 0\le k\le n-1,\\ (-1)^{n+1} c^{-n/2}, & k=n,\\ 0, & k>n. \end{cases}$$
Consequently,
$$\|\Delta f\|_{c,1}^2 = \sum_{k=0}^{n-1}\Bigl(1+\frac{1}{\sqrt c}\Bigr)^2 + 1 > n\Bigl(1+\frac{1}{\sqrt c}\Bigr)^2.$$
Hence
$$\|\Delta f\|_{c,1} > \Bigl(\frac{n}{n+1}\Bigr)^{1/2}\Bigl(1+\frac{1}{\sqrt c}\Bigr)\|f\|_{c,1}$$
and
$$\widetilde\gamma(c,1) \ge \lim_{n\to\infty}\Bigl(\frac{n}{n+1}\Bigr)^{1/2}\Bigl(1+\frac{1}{\sqrt c}\Bigr) = 1+\frac{1}{\sqrt c}.$$
This inequality and (1.12) with $\beta=1$ imply $\widetilde\gamma(c,1) = 1+\frac{1}{\sqrt c}$. $\square$

3. Meixner polynomials
For any pair of parameters $(\beta,c)$ such that $\beta>0$ and $c\in(0,1)$, recall the Meixner inner product and norm,
$$\langle f,g\rangle = \langle f,g\rangle_{c,\beta} := \sum_{k=0}^{\infty}\frac{(\beta)_k}{k!}\,c^k f(k)g(k), \qquad \|f\|_{c,\beta} = \langle f,f\rangle_{c,\beta}^{1/2}.$$
The induced Hilbert space $\ell^2(c,\beta) = \{f : \|f\|_{c,\beta}<\infty\}$ contains $\mathcal P$, and the corresponding orthogonal polynomials are the Meixner polynomials $\{M_n(\cdot;\beta,c)\}_{n\in\mathbb N}$, defined by
$$M_n(x;\beta,c) := {}_2F_1\Bigl(\begin{matrix}-n,\ -x\\ \beta\end{matrix}\Bigm|\,1-\frac{1}{c}\Bigr).$$
Here ${}_2F_1$ is the hypergeometric function,
$${}_2F_1\Bigl(\begin{matrix}p,\ q\\ r\end{matrix}\Bigm|\,t\Bigr) = \sum_{k=0}^{\infty}\frac{(p)_k(q)_k}{(r)_k}\,\frac{t^k}{k!}.$$
In the following lemma we collect some properties of the Meixner polynomials.
Lemma 3.1.
The Meixner polynomials possess the following properties:
(i) Orthogonality:
$$\langle M_m,M_n\rangle := \sum_{x=0}^{\infty}\frac{(\beta)_x}{x!}\,c^x M_m(x;\beta,c)M_n(x;\beta,c) = \frac{c^{-n}\,n!}{(\beta)_n(1-c)^{\beta}}\,\delta_{m,n}, \qquad m,n\in\mathbb N;$$
(ii) Forward shift operator identity:
$$\Delta M_n(x;\beta,c) := M_n(x+1;\beta,c) - M_n(x;\beta,c) = \frac{n}{\beta}\,\frac{c-1}{c}\,M_{n-1}(x;\beta+1,c);$$
(iii) Recurrence relation:
$$(n+\beta)\,M_n(x;\beta+1,c) = \beta\,M_n(x;\beta,c) + n\,M_{n-1}(x;\beta+1,c);$$
(iv) Expansion formula:
$$M_n(x;\beta+1,c) = \frac{n!}{(\beta+1)_n}\sum_{k=0}^{n}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c), \qquad n\in\mathbb N.$$
Proof. Properties (i) and (ii) are well known; see, e.g., [22, (1.9.2), (1.9.6)]. For the proof of property (iii), we write, with $z=1-1/c$, the formulae for $M_n(x;\beta+1,c)$ and $M_n(x;\beta,c)$:
$$M_n(x;\beta+1,c) = 1 + \sum_{k=1}^{n}\binom{n}{k}\frac{x(x-1)\cdots(x-k+1)}{(\beta+1)(\beta+2)\cdots(\beta+k)}\,z^k,$$
$$M_n(x;\beta,c) = 1 + \sum_{k=1}^{n}\binom{n}{k}\frac{x(x-1)\cdots(x-k+1)}{\beta(\beta+1)\cdots(\beta+k-1)}\,z^k.$$
Subtracting the second equality multiplied by $\beta$ from the first one multiplied by $n+\beta$, we obtain the result.
The proof of property (iv) is by induction with respect to $n$. Obviously, the equality holds for $n=0$, and we assume it is true for some $n\in\mathbb N$. Property (iii) and the induction hypothesis then imply
$$M_{n+1}(x;\beta+1,c) = \frac{\beta}{n+1+\beta}\,M_{n+1}(x;\beta,c) + \frac{n+1}{n+1+\beta}\,M_n(x;\beta+1,c)$$
$$= \frac{\beta}{n+1+\beta}\,M_{n+1}(x;\beta,c) + \frac{(n+1)!}{(\beta+1)_{n+1}}\sum_{k=0}^{n}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c) = \frac{(n+1)!}{(\beta+1)_{n+1}}\sum_{k=0}^{n+1}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c),$$
which accomplishes the induction step. $\square$

In view of Lemma 3.1(i), the orthonormal Meixner polynomials $\{p_m\}_{m\in\mathbb N}$ are given by
(3.1) $p_m(x;\beta,c) := (1-c)^{\beta/2}\,c^{m/2}\sqrt{\dfrac{(\beta)_m}{m!}}\;M_m(x;\beta,c)$.
The forward difference of the orthonormal Meixner polynomials obeys the following representation:
Lemma 3.2.
For any $m\in\mathbb N$,
$$\Delta p_m(x;\beta,c) = \frac{c-1}{c}\sum_{k=0}^{m-1}\frac{\alpha_k}{\alpha_m}\,p_k(x;\beta,c),$$
where
(3.2) $\alpha_k := c^{-k/2}\sqrt{\dfrac{(\beta)_k}{k!}}$.
Proof.
From Lemma 3.1(ii) and (iv) we have
$$\Delta M_m(x;\beta,c) = \frac{m}{\beta}\,\frac{c-1}{c}\,\frac{(m-1)!}{(\beta+1)_{m-1}}\sum_{k=0}^{m-1}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c) = \frac{c-1}{c}\,\frac{m!}{(\beta)_m}\sum_{k=0}^{m-1}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c),$$
or, equivalently,
$$\frac{(\beta)_m}{m!}\,\Delta M_m(x;\beta,c) = \frac{c-1}{c}\sum_{k=0}^{m-1}\frac{(\beta)_k}{k!}\,M_k(x;\beta,c).$$
In this identity we replace $M_k(x;\beta,c)$ by
$$M_k(x;\beta,c) = c^{-k/2}(1-c)^{-\beta/2}\sqrt{\frac{k!}{(\beta)_k}}\;p_k(x;\beta,c), \qquad k=0,\ldots,m,$$
and deduce the desired representation. $\square$

4. Best Markov constants and extreme eigenvalues of Jacobi matrices
In seeking the best Markov constant
(4.1) $\gamma_n(c,\beta) := \sup\{\|\Delta f\|_{c,\beta} : f\in\mathcal P_n,\ \|f\|_{c,\beta}=1\}$,
we may assume without loss of generality that
$$f = t_1p_1 + t_2p_2 + \cdots + t_np_n = \mathbf t^\top\mathbf p, \qquad \|f\|_{c,\beta} = \|\mathbf t\| = (t_1^2+\cdots+t_n^2)^{1/2} = 1,$$
with $\mathbf t^\top = (t_1,t_2,\ldots,t_n)\in\mathbb R^n$ and $\mathbf p^\top = (p_1,p_2,\ldots,p_n)$. Indeed, since $\Delta p_0 = 0$, $p_0$ cannot yield an increase of $\|\Delta f\|_{c,\beta}$.
According to Lemma 3.2, we have
$$\Delta\mathbf p = \frac{c-1}{c}\,A_n\,\widetilde{\mathbf p},$$
where $\widetilde{\mathbf p}^\top := (p_0,p_1,\ldots,p_{n-1})$ and
(4.2) $$A_n := \begin{pmatrix} \alpha_0/\alpha_1 & 0 & 0 & \cdots & 0\\ \alpha_0/\alpha_2 & \alpha_1/\alpha_2 & 0 & \cdots & 0\\ \vdots & & \ddots & & \vdots\\ \alpha_0/\alpha_n & \alpha_1/\alpha_n & \alpha_2/\alpha_n & \cdots & \alpha_{n-1}/\alpha_n \end{pmatrix}.$$
Hence $\Delta f = (1-1/c)\,\mathbf t^\top A_n\widetilde{\mathbf p}$, and
$$\|\Delta f\|_{c,\beta}^2 = (1-1/c)^2\,\|A_n^\top\mathbf t\|^2 = (1-1/c)^2\,\langle A_nA_n^\top\mathbf t,\mathbf t\rangle.$$
Therefore,
(4.3) $\gamma_n^2(c,\beta) = (1-1/c)^2\sup_{\|\mathbf t\|=1}\langle A_nA_n^\top\mathbf t,\mathbf t\rangle = (1-1/c)^2\,\mu_{\max}(c,\beta)$,
where $\mu_{\max} = \mu_{\max}(c,\beta)$ is the largest eigenvalue of the positive definite matrix $A_nA_n^\top$. Since $A_n^\top A_n = A_n^{-1}(A_nA_n^\top)A_n$, the matrices $A_n^\top A_n$ and $A_nA_n^\top$ are similar, and therefore $\mu_{\max}$ is also the largest eigenvalue of the positive definite matrix $A_n^\top A_n$.
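Lemma 3.1(iii) and Lemma 3.2, on which the matrix representation above rests, can be checked numerically for small degrees; the sketch below is our own test harness with arbitrarily chosen parameters, evaluating the Meixner polynomials from their terminating hypergeometric sum.

```python
# Numerical check of Lemma 3.1(iii) and Lemma 3.2 -- an illustrative sketch;
# the parameter values are arbitrary test choices.
import math

def poch(a, k):
    # Pochhammer symbol (a)_k
    r = 1.0
    for j in range(k):
        r *= a + j
    return r

def M(n, x, beta, c):
    # Meixner polynomial M_n(x; beta, c) as a terminating 2F1 sum
    z = 1.0 - 1.0 / c
    return sum(poch(-n, k) * poch(-x, k) / (poch(beta, k) * math.factorial(k))
               * z ** k for k in range(n + 1))

def p(m, x, beta, c):
    # orthonormal Meixner polynomial, formula (3.1)
    return ((1 - c) ** (beta / 2) * c ** (m / 2)
            * math.sqrt(poch(beta, m) / math.factorial(m)) * M(m, x, beta, c))

def alpha(k, beta, c):
    # the coefficients (3.2)
    return c ** (-k / 2) * math.sqrt(poch(beta, k) / math.factorial(k))

beta, c = 2.5, 0.4
for n in range(1, 6):
    for x in range(8):
        # Lemma 3.1(iii)
        lhs = (n + beta) * M(n, x, beta + 1, c)
        rhs = beta * M(n, x, beta, c) + n * M(n - 1, x, beta + 1, c)
        assert abs(lhs - rhs) < 1e-9
        # Lemma 3.2
        lhs = p(n, x + 1, beta, c) - p(n, x, beta, c)
        rhs = (c - 1) / c * sum(alpha(k, beta, c) / alpha(n, beta, c)
                                * p(k, x, beta, c) for k in range(n))
        assert abs(lhs - rhs) < 1e-9
```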
It turns out that it is advantageous to work with the inverse matrices
$$B_n = (A_nA_n^\top)^{-1}, \qquad C_n = (A_n^\top A_n)^{-1},$$
as we shall show that they are tridiagonal. Let us find the explicit form of $B_n$ and $C_n$. The matrix $A_n$ in (4.2) can be represented in the form
(4.4) $A_n = \mathrm{diag}\{\alpha_k^{-1}\}\,T_n\,\mathrm{diag}\{\alpha_{k-1}\}$,
where $\mathrm{diag}\{\alpha_k^{-1}\}$ and $\mathrm{diag}\{\alpha_{k-1}\}$ are diagonal $n\times n$ matrices with entries $(1/\alpha_1,\ldots,1/\alpha_n)$ and $(\alpha_0,\alpha_1,\ldots,\alpha_{n-1})$ on the main diagonal, respectively, and $T_n$ is the $n\times n$ lower triangular matrix with entries $t_{i,j}=1$ if $i\ge j$ and $t_{i,j}=0$ otherwise.
The matrices $T_n^{-1}$ and $(T_n^\top)^{-1}$ are two-diagonal; namely, the only nonzero entries of $T_n^{-1}$ are $t^{-1}_{k,k}=1$, $k=1,\ldots,n$, and $t^{-1}_{k+1,k}=-1$, $k=1,\ldots,n-1$. It follows from (4.4) that
$$B_n = (A_nA_n^\top)^{-1} = \mathrm{diag}\{\alpha_k\}\,(T_n^\top)^{-1}\,\mathrm{diag}\{\alpha_{k-1}^{-2}\}\,T_n^{-1}\,\mathrm{diag}\{\alpha_k\},$$
and using the explicit form of $T_n^{-1}$ and $(T_n^\top)^{-1}$, we conclude that $B_n$ is a tridiagonal matrix whose diagonal entries are
$$b_{k,k} = 1+\alpha_k^2/\alpha_{k-1}^2,\quad k=1,\ldots,n-1, \qquad b_{n,n} = \alpha_n^2/\alpha_{n-1}^2,$$
while the off-diagonal ones are $b_{k,k+1} = b_{k+1,k} = -\alpha_{k+1}/\alpha_k$, $k=1,\ldots,n-1$. Similarly,
$$C_n = (A_n^\top A_n)^{-1} = \mathrm{diag}\{\alpha_{k-1}^{-1}\}\,T_n^{-1}\,\mathrm{diag}\{\alpha_k^2\}\,(T_n^\top)^{-1}\,\mathrm{diag}\{\alpha_{k-1}^{-1}\},$$
so that $C_n$ is a tridiagonal matrix whose diagonal entries are $c_{1,1} = \alpha_1^2/\alpha_0^2$ and $c_{k,k} = 1+\alpha_k^2/\alpha_{k-1}^2$, $k=2,\ldots,n$, and the off-diagonal ones are $c_{k,k+1} = c_{k+1,k} = -\alpha_k/\alpha_{k-1}$, $k=1,\ldots,n-1$. Replacement of the explicit values of $\alpha_k$ from (3.2) yields
(4.5) $$B_n = \begin{pmatrix} \frac{\beta}{c}+1 & -\sqrt{\frac{\beta+1}{2c}} & & & \\ -\sqrt{\frac{\beta+1}{2c}} & \frac{\beta+1}{2c}+1 & -\sqrt{\frac{\beta+2}{3c}} & & \\ & \ddots & \ddots & \ddots & \\ & & & \frac{\beta+n-2}{(n-1)c}+1 & -\sqrt{\frac{\beta+n-1}{nc}} \\ & & & -\sqrt{\frac{\beta+n-1}{nc}} & \frac{\beta+n-1}{nc} \end{pmatrix},$$
(4.6) $$C_n = \begin{pmatrix} \frac{\beta}{c} & -\sqrt{\frac{\beta}{c}} & & & \\ -\sqrt{\frac{\beta}{c}} & \frac{\beta+1}{2c}+1 & -\sqrt{\frac{\beta+1}{2c}} & & \\ & \ddots & \ddots & \ddots & \\ & & & \frac{\beta+n-2}{(n-1)c}+1 & -\sqrt{\frac{\beta+n-2}{(n-1)c}} \\ & & & -\sqrt{\frac{\beta+n-2}{(n-1)c}} & \frac{\beta+n-1}{nc}+1 \end{pmatrix}.$$
Let $\widetilde B_n$ and $\widetilde C_n$ be the corresponding Jacobi matrices whose diagonal entries coincide with those of $B_n$ and $C_n$, but whose off-diagonal entries are opposite in sign to those of $B_n$ and $C_n$. Then obviously the eigenvalues of $B_n$ and $\widetilde B_n$ coincide, and so do those of $C_n$ and $\widetilde C_n$. Since $\mu_{\max} = 1/\lambda_{\min}$, where $\lambda_{\min}$ is the smallest eigenvalue of any of the matrices $B_n$, $\widetilde B_n$, $C_n$ and $\widetilde C_n$, (4.3) yields the following

Theorem 4.1.
The best constant $\gamma_n(c,\beta)$ in the Markov-Bernstein inequality
$$\|\Delta p\|_{c,\beta} \le \gamma_n\,\|p\|_{c,\beta}, \qquad p\in\mathcal P_n,$$
admits the representation
(4.7) $\gamma_n(c,\beta) = \dfrac{1/c-1}{\sqrt{\lambda_{\min}(c,\beta)}}$,
where $\lambda_{\min}(c,\beta)>0$ is the smallest eigenvalue of any of the matrices $B_n$, $\widetilde B_n$, $C_n$ and $\widetilde C_n$.

As is well known, every $n\times n$ Jacobi matrix $J_n$ defines, through a three-term recurrence relation, a sequence of orthonormal polynomials $\{P_m\}_{m=0}^{n}$, and the zeros of $P_n$ are the eigenvalues of $J_n$. We may therefore reformulate Theorem 4.1 as

Theorem 4.1′. The best Markov constant $\gamma_n(c,\beta)$ admits the representation (4.7), where $\lambda_{\min}(c,\beta)$ is the smallest zero of the $n$-th polynomial $P_n = P_n(c,\beta;\cdot)$ in the sequence of polynomials defined recursively by
$$P_0(x) = 1, \qquad P_1(x) = x - \frac{\beta}{c},$$
$$P_k(x) = \Bigl(x - \frac{\beta+k-1}{kc} - 1\Bigr)P_{k-1}(x) - \frac{\beta+k-2}{(k-1)c}\,P_{k-2}(x), \qquad k\ge 2.$$
Since $\beta>0$ and $c\in(0,1)$, the recurrence coefficients are positive, and by Favard's theorem $\{P_k\}_{k\in\mathbb N}$ form a system of orthogonal polynomials.

5. Two-sided estimates for $\gamma_n(c,1)$

The matrices $B_n = B_n(c,\beta)$ and $C_n = C_n(c,\beta)$ have a particularly simple form in the case $\beta=1$; for instance,
$$D_n := C_n(c,1) = \begin{pmatrix} \frac{1}{c} & -\frac{1}{\sqrt c} & & \\ -\frac{1}{\sqrt c} & \frac{1}{c}+1 & -\frac{1}{\sqrt c} & \\ & \ddots & \ddots & \ddots \\ & & -\frac{1}{\sqrt c} & \frac{1}{c}+1 \end{pmatrix}.$$
We shall find estimates for $\lambda_{\min}(c,1)$ from the characteristic equation $|\lambda E_n - D_n| = 0$, where $E_n$ is the identity matrix. By the change of variable $\lambda = 1+\frac{1}{c}+\frac{2z}{\sqrt c}$ this equation simplifies to $\varphi_n(z)=0$, where
$$\varphi_n(z) = \begin{vmatrix} z+\frac{\sqrt c}{2} & \frac12 & & \\ \frac12 & z & \frac12 & \\ & \ddots & \ddots & \ddots \\ & & \frac12 & z \end{vmatrix}.$$
It is easy to see that
(5.1) $\varphi_n(z) = \dfrac{1}{2^n}\bigl(U_n(z) + \sqrt c\,U_{n-1}(z)\bigr)$,
where $U_m$ is the $m$-th Chebyshev polynomial of the second kind,
$$U_m(z) = \frac{\sin\bigl((m+1)\arccos z\bigr)}{\sqrt{1-z^2}}, \qquad z\in[-1,1].$$
Indeed, (5.1) is readily verified to be true for $n=1,2$, and $\{\varphi_m\}$ satisfy the recurrence relation
$$\varphi_m(z) = z\,\varphi_{m-1}(z) - \tfrac14\,\varphi_{m-2}(z), \qquad m\ge 3,$$
which is also satisfied by $\{2^{-m}U_m\}$.

Lemma 5.1.
The zeros of $\varphi_n$, $n\ge 2$, are located in $(-1,1)$ and interlace with the zeros of $U_{n-1}$. Moreover, if $\tau$ is the smallest zero of $\varphi_n$, then $\tau = -1+\varepsilon_n$, where
$$\frac{1}{n(n+1)} < \varepsilon_n < 2\sin^2\frac{\pi}{2n} < \frac{\pi^2}{2n^2}.$$

Proof.
Clearly, $\varphi_n$ is a monic polynomial of degree $n$. Let $\eta_k = \cos\frac{k\pi}{n}$, $k=1,\ldots,n-1$, be the zeros of $U_{n-1}$; then $-1 < \eta_{n-1} < \eta_{n-2} < \cdots < \eta_1 < 1$, and
$$\mathrm{sign}\,\varphi_n(\eta_k) = \mathrm{sign}\,U_n(\eta_k) = (-1)^k, \qquad k=1,\ldots,n-1.$$
This and
$$\varphi_n(-1) = (-1)^n\,2^{-n}\bigl(n+1-n\sqrt c\bigr), \qquad \varphi_n(1) = 2^{-n}\bigl(n+1+n\sqrt c\bigr)$$
imply that the zeros of $\varphi_n$ lie in $(-1,1)$ and interlace with the zeros of $U_{n-1}$.
The upper bound for $\varepsilon_n$ follows from
$$\tau < \eta_{n-1} = -\cos\frac{\pi}{n} = -1 + 2\sin^2\frac{\pi}{2n} < -1 + \frac{\pi^2}{2n^2}.$$
To obtain the lower bound for $\varepsilon_n$, we apply one step of Newton's method for finding $\tau$, the smallest zero of $\varphi_n(z)$, with initial value $\tau^{(0)} = -1$. We have
$$\tau > \tau^{(1)} = -1 - \frac{\varphi_n(-1)}{\varphi_n'(-1)} = -1 + \frac{3(n+1-n\sqrt c)}{n(n+1)\bigl(n+2-(n-1)\sqrt c\bigr)} > -1 + \frac{1}{n(n+1)},$$
where for the last inequality we have used that $g(x) = \dfrac{n+1-nx}{n+2-(n-1)x}$ is a decreasing function in $(0,1)$, so that $g(\sqrt c) > g(1) = \tfrac13$. $\square$
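The identity (5.1) and the bounds of Lemma 5.1 can be confirmed numerically; the sketch below is our own, with an arbitrary choice of $c$, and locates the smallest zero of $\varphi_n$ by bisection.

```python
# Numerical check of (5.1)-based computations and of Lemma 5.1 -- an
# illustrative sketch; the value of c is an arbitrary test choice.
import math

def phi(n, z, c):
    # phi_n(z) = 2^{-n} (U_n(z) + sqrt(c) U_{n-1}(z)); U_m via its recurrence
    u_prev, u = 1.0, 2.0 * z          # U_0(z), U_1(z)
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * z * u - u_prev
    return (u + math.sqrt(c) * u_prev) / 2.0 ** n

def smallest_zero(n, c):
    # phi_n changes sign exactly once on [-1, eta_{n-1}]; bisect there
    a, b = -1.0, -math.cos(math.pi / n)
    fa = phi(n, a, c)
    for _ in range(100):
        m = 0.5 * (a + b)
        if fa * phi(n, m, c) <= 0.0:
            b = m
        else:
            a, fa = m, phi(n, m, c)
    return 0.5 * (a + b)

c = 0.3
for n in range(2, 40):
    eps = smallest_zero(n, c) + 1.0            # eps_n = tau + 1
    assert 1.0 / (n * (n + 1)) < eps < math.pi ** 2 / (2.0 * n * n)

# via Theorem 4.1: gamma_n(c,1) = (1/c - 1)/sqrt(1 + 1/c + 2*tau/sqrt(c))
tau = smallest_zero(30, c)
gamma = (1 / c - 1) / math.sqrt(1 + 1 / c + 2 * tau / math.sqrt(c))
assert gamma < 1 + 1 / math.sqrt(c)            # consistent with (1.8)
```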
1) = 1 + 1 c + 2 τ √ c = 1 + 1 c + 2( − ε n ) √ c = (cid:16) √ c − (cid:17) + 2 ε n √ c , hence(5.2) 1 λ min ( c,
1) = 1 (cid:16) √ c − (cid:17) (cid:16) √ c ε n (1 − √ c ) (cid:17) . Now (4.7), (5.2) and the estimates for ε n from Lemma 5.1 imply Theorem 5.2.
For any $n\ge 2$, the best constant $\gamma_n(c,1)$ in the Markov-Bernstein inequality
$$\|\Delta p\|_{c,1} \le \gamma_n(c,1)\,\|p\|_{c,1}, \qquad p\in\mathcal P_n,$$
admits the estimates
(5.3) $$\Bigl(1+\frac{1}{\sqrt c}\Bigr)\Bigl(1+\frac{4\sqrt c\,\sin^2\frac{\pi}{2n}}{(1-\sqrt c)^2}\Bigr)^{-1/2} \le \gamma_n(c,1) \le \Bigl(1+\frac{1}{\sqrt c}\Bigr)\Bigl(1+\frac{2\sqrt c}{(1-\sqrt c)^2\,n(n+1)}\Bigr)^{-1/2}.$$
Theorem 1.1(i) now follows from the two-sided estimates (5.3). Note that the upper estimate for $\gamma_n(c,1)$ in (5.3) sharpens the one in Theorem 1.1(i).

6. Monotone dependence of eigenvalues on $\beta$

The statement of Theorem 1.1(ii) is a consequence of the following
Proposition 6.1.
For a fixed $c\in(0,1)$, each eigenvalue $\lambda$ of the matrix $B_n(c,\beta)$, defined by (4.5), is a strictly monotone increasing function of $\beta$ in the interval $(0,\infty)$.

We apply an elegant method for establishing monotonicity of zeros of orthogonal polynomials, or equivalently of eigenvalues of Jacobi matrices, based on the Hellmann-Feynman theorem [15, 12] and the Wall-Wetzel criterion [43] for positive definiteness of Jacobi matrices. We describe it briefly, and refer to Chapter 7.3 of Ismail's book [18] as well as to [17, 19, 20] for more details. Consider the parametric sequence $\{p_k(x;\tau)\}_{k=0}^{\infty}$ of orthonormal polynomials generated by the three-term recurrence relation
$$p_{-1}(x;\tau) = 0, \qquad p_0(x;\tau) = 1,$$
$$x\,p_k(x;\tau) = a_k(\tau)\,p_{k+1}(x;\tau) + b_k(\tau)\,p_k(x;\tau) + a_{k-1}(\tau)\,p_{k-1}(x;\tau), \qquad k\ge 0,$$
where $a_{k-1}(\tau)>0$. The zeros of the polynomial $p_n(x;\tau)$ coincide with the eigenvalues of the Jacobi matrix $J_n = J_n(\tau)$, whose diagonal entries are $b_k(\tau)$, $k=0,\ldots,n-1$, and whose off-diagonal entries are $a_k(\tau)$, $k=0,\ldots,n-2$. Moreover, if $\lambda_j = \lambda_j(\tau)$ is a zero of $p_n(x;\tau)$ and
$$\mathbf p_j = \bigl(p_0(\lambda_j;\tau), p_1(\lambda_j;\tau), \ldots, p_{n-1}(\lambda_j;\tau)\bigr)^\top,$$
then $J_n\mathbf p_j = \lambda_j\mathbf p_j$. Let us denote by $J_n' = J_n'(\tau)$ the tridiagonal matrix whose entries are the derivatives of the corresponding entries of $J_n(\tau)$. Then the Hellmann-Feynman theorem, in the particular case which is convenient for our objectives, reads as follows:

Theorem 6.2.
For every zero $\lambda_j(\tau)$ of $p_n(x;\tau)$ we have
$$\lambda_j'(\tau) = \frac{\mathbf p_j^\top J_n'\,\mathbf p_j}{\mathbf p_j^\top\mathbf p_j}.$$
Furthermore, if the numerator of the latter expression is positive, then the zeros $\lambda_j(\tau)$ of $p_n(x;\tau)$ are increasing functions of $\tau$. In particular, the latter statement holds if $J_n'$ is a positive definite matrix.

Let us recall that a sequence $\{c_n\}_{n=1}^{\infty}$ of non-negative numbers is called a chain sequence if there exists another sequence $\{v_n\}_{n=0}^{\infty}$, called a parametric one, such that $0\le v_0<1$, $0<v_n<1$, $n\in\mathbb N$, and $c_n = (1-v_{n-1})v_n$ for every $n\in\mathbb N$ (see [18]). A criterion for positive definiteness of Jacobi matrices, due to Wall and Wetzel [43], applied to $J_n'$, yields:

Proposition 6.3. Let $J_n'$ be a Jacobi matrix with positive diagonal entries $b_i'>0$, $i=0,\ldots,n-1$. If there is a chain sequence $\{\kappa_i\}$ such that
$$\frac{[a_i']^2}{b_i'\,b_{i+1}'} < \kappa_i, \qquad i=0,1,\ldots,n-2,$$
then $J_n'$ is positive definite.

Proof of Proposition 6.1. We shall show that the matrix $\widetilde B_n' = (\widetilde b_{ij}')_{n\times n}$, obtained from $\widetilde B_n$ by partial differentiation of its entries with respect to $\beta$, is positive definite. Straightforward calculations show that
$$\widetilde b_{k,k}' = \frac{1}{kc}, \quad k=1,\ldots,n, \qquad \widetilde b_{k,k+1}' = \frac{1}{2\sqrt{(k+1)(k+\beta)c}}, \quad k=1,\ldots,n-1.$$
By Proposition 6.3 and the fact that $\{\kappa_i\} = \{1/4, 1/4, \ldots\}$ is a chain sequence, a sufficient condition for $\widetilde B_n'$ to be positive definite is that the following inequalities are satisfied:
(6.1) $$\frac{[\widetilde b_{k,k+1}']^2}{\widetilde b_{k,k}'\,\widetilde b_{k+1,k+1}'} < \frac14, \qquad k=1,\ldots,n-1.$$
They are equivalent to the inequalities
$$\beta + k(1-c) > 0, \qquad k=1,\ldots,n-1,$$
which are obviously true, as $\beta>0$ and $c\in(0,1)$. Hence $\widetilde B_n'$ is a positive definite matrix. $\square$
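Proposition 6.1 and Theorem 1.1(ii) lend themselves to a direct numerical test. The sketch below is our own (the values of $n$, $c$ and the $\beta$ grid are arbitrary); it builds $\widetilde B_n$ from (4.5), computes its smallest eigenvalue by bisection on a Sturm-sequence eigenvalue count, and checks the monotonicity in $\beta$ as well as the $\beta=1$ estimates (5.3).

```python
# Numerical check of Proposition 6.1 / Theorem 1.1(ii) -- an illustrative
# sketch; n, c and the beta grid are arbitrary test choices.
import math

def btilde(n, c, beta):
    # diagonal and off-diagonal of the Jacobi matrix corresponding to (4.5)
    diag = [(beta + k - 1) / (k * c) + 1 for k in range(1, n)]
    diag.append((beta + n - 1) / (n * c))
    off = [math.sqrt((beta + k) / ((k + 1) * c)) for k in range(1, n)]
    return diag, off

def count_below(diag, off, lam):
    # Sturm sequence: number of eigenvalues of the tridiagonal matrix < lam
    cnt, d = 0, 1.0
    for i, a in enumerate(diag):
        d = a - lam - ((off[i - 1] ** 2 / d) if i > 0 else 0.0)
        if d == 0.0:
            d = -1e-300
        if d < 0.0:
            cnt += 1
    return cnt

def lambda_min(n, c, beta):
    diag, off = btilde(n, c, beta)
    lo, hi = 0.0, max(diag) + 2.0 * max(off)   # Gershgorin-type upper bound
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if count_below(diag, off, mid) >= 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

n, c = 12, 0.4
lams = [lambda_min(n, c, b) for b in (0.5, 1.0, 2.0, 4.0)]
assert all(x < y for x, y in zip(lams, lams[1:]))          # Proposition 6.1
gammas = [(1 / c - 1) / math.sqrt(lam) for lam in lams]
assert all(x > y for x, y in zip(gammas, gammas[1:]))      # Theorem 1.1(ii)

# cross-check the beta = 1 value against the two-sided estimates (5.3)
g = (1 / c - 1) / math.sqrt(lambda_min(n, c, 1.0))
s = math.sqrt(c)
low = (1 + 1 / s) / math.sqrt(1 + 4 * s * math.sin(math.pi / (2 * n)) ** 2 / (1 - s) ** 2)
high = (1 + 1 / s) / math.sqrt(1 + 2 * s / ((1 - s) ** 2 * n * (n + 1)))
assert low <= g <= high
```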
In particular, the smallest eigenvalue $\lambda_{\min}(c,\beta)$ of $\widetilde B_n$ is a monotone increasing function of $\beta$. By Theorem 4.1, $\gamma_n(c,\beta) = (1/c-1)/\sqrt{\lambda_{\min}(c,\beta)}$ is therefore a monotone decreasing function of $\beta$, which proves Theorem 1.1(ii).

7. Comments
The Markov-Bernstein inequality for sequences, Theorem 1.3, was rather easy to prove, and was then used in the proof of parts (i) and (iii) of Theorem 1.1, the Markov-Bernstein inequality for polynomials. Since we observe, at least for $\beta=1$, coincidence of the best Markov constant in $\ell^2(c,\beta)$ with the limit of $\gamma_n$ in the polynomial case, a natural question is whether, conversely, the Markov-Bernstein inequality for sequences can be deduced from the Markov-Bernstein inequality for polynomials. Such an approach would be possible if the corresponding general Bernstein problem had a solution in this particular case. Indeed, let us suppose that the following question has an affirmative answer: Is it true that, given $c\in(0,1)$, $\beta\in[1,\infty)$, a sequence $f\in\ell^2(c,\beta)$ and $\varepsilon>0$, there is an algebraic polynomial $p$ such that
$$\sum_{k=0}^{\infty}\frac{(\beta)_k}{k!}\,c^k\,[f(k)-p(k)]^2 < \varepsilon\,?$$
Then the Markov-Bernstein inequality for sequences would be an immediate consequence of the Markov-Bernstein inequality for polynomials.
A general version of Bernstein's approximation problem reads as follows: given a weight function $W:\mathbb R\to[0,1]$ and a corresponding weighted norm $\|\cdot\|_W$, say that of $L^p_W(\mathbb R)$, such that all polynomials have finite norm, i.e. $\|P\|_W<\infty$ for every $P\in\mathcal P$, is it true that for every function $f$ with $\|f\|_W<\infty$ and each $\varepsilon>0$ there is an algebraic polynomial $P$ such that $\|f-P\|_W<\varepsilon$? The weights $W$ for which this problem has an affirmative answer are sometimes called admissible ones. The first results concerning characterisation of the admissible weights for the uniform norm were obtained by S. N. Mergelyan, N. I. Akhiezer, H. Pollard, S. Izumi and T. Kawata, M. Dzrbasjan and L. Carleson. We refer to Lubinsky's survey [24] and to P. Koosis's [23] and M. Ganzburg's [14] books for details, further contributions and references.
The result in this direction which is most relevant to our situation is due to G. Freud [13, Theorem 3.3 on p. 73] (see also [24, Theorem 1.7]). His contribution is an extension of a result of M. Riesz [37], which was obtained even before Bernstein posed his problem. G. Freud's result applies to our problem, but only for sequences in $\ell^2(c,\beta)$ of at most polynomial growth, despite the fact that it implies that such sequences would possess one-sided polynomial approximations.
We also raise a problem which is the "continuous" counterpart of the one stated above. More precisely, it would be of interest to know whether $W(x) = e^{-ax}\,\Gamma(x+\beta)/\Gamma(x+1)$, where $a>0$ and $\beta>1$, is an admissible weight for Bernstein's approximation problem in $L^p_W(0,\infty)$ for $p\ge 1$. The above mentioned results of S. Izumi and T. Kawata [21], M. Dzrbasjan [10] and L. Carleson [6] imply that this is true for $\beta=1$.
Finally, it would be of interest to study eventual extensions of the results in Theorems 1.3 and 1.1 to the relevant weighted $\ell^p$ norms.

Acknowledgments
The first author thanks Doron Lubinsky for his interest in, and valuable comments about, the discrete version of Bernstein's approximation problem stated in the last section. The second author thanks Vilmos Totik for the fruitful discussions on the results in this paper. Both authors are grateful to the anonymous referee of the previous version of the paper, whose careful reading and valuable suggestions contributed to the improvement of the presentation.
References

[1] D. Aleksov and G. Nikolov, Markov $L^2$ inequality with the Gegenbauer weight, J. Approx. Theory (2018), 224–241.
[2] D. Aleksov, G. Nikolov, A. Shadrin, On the Markov inequality in the $L^2$ norm with the Gegenbauer weight, J. Approx. Theory (2016), 9–20.
[3] V. V. Arestov, On integral inequalities for trigonometric polynomials and their derivatives,
Math. USSR-Izv.
Approximation Theory X: Abstract and Classical Analysis (C. K. Chui, L. L. Schumaker, and J. Stoeckler, Eds.), Vanderbilt University Press, 2002, pp. 31–90.
[6] L. Carleson, Bernstein's approximation problem,
Proc. Amer. Math. Soc. (1951), 953–961.
[7] P. Dörfler, New inequalities of Markov type, SIAM J. Math. Anal. (1987), 490–494.
[8] P. Dörfler, Über die bestmögliche Konstante in Markov-Ungleichungen mit Laguerre-Gewicht, Österreich. Akad. Wiss. Math.-Natur. Kl. Sitzungsber. II (1991), 13–20.
[9] P. Dörfler, Asymptotics of the best constant in a certain Markov-type inequality, J. Approx. Theory (2002), 84–97.
[10] M. M. Dzrbasjan, On metrical criteria of completeness of systems of polynomials in unbounded domains,
Dokl. Acad. Nauk Armenian SSR (1947), 3–10.
[11] T. Erdélyi, Notes on inequalities with doubling weights, J. Approx. Theory (1999), 60–72.
[12] R. P. Feynman, Forces in molecules,
Phys. Rev. (1939), 340–343.
[13] G. Freud, Orthogonal Polynomials, Akadémiai Kiadó/Pergamon Press, Budapest, 1971.
[14] M. I. Ganzburg, Limit Theorems for Polynomial Approximation with Exponential Weights, Mem. Amer. Math. Soc. 897 (2008).
[15] H. G. A. Hellmann, Zur Rolle der kinetischen Elektronenenergie für die zwischenatomaren Kräfte, Z. Phys. (1933), 180–190.
[16] E. Hille, G. Szegő and J. D. Tamarkin, On some generalizations of a theorem of A. Markoff, Duke Math. J.
Adv. Appl. Math. (1987), 111–118.
[18] M. E. H. Ismail, Classical and Quantum Orthogonal Polynomials in One Variable, Encyclopedia of Mathematics and its Applications, Vol. 98, Cambridge University Press, Cambridge, 2005.
[19] M. E. H. Ismail and M. E. Muldoon, A discrete approach to monotonicity of zeros of orthogonal polynomials, Trans. Amer. Math. Soc. 323 (1991), 65–78.
[20] M. E. H. Ismail and R. Zhang, On the Hellmann-Feynman theorem and the variation of zeros of certain special functions,
Adv. Appl. Math. (1988), 439–446.
[21] S. Izumi and T. Kawata, Quasi-analytic class and closure of $\{t^n\}$ in the interval $(-\infty,\infty)$, Tohoku Math. J. 43 (1937), 267–273.
[22] R. Koekoek, R. F. Swarttouw, The Askey-scheme of hypergeometric orthogonal polynomials and its $q$-analogue, Report 98-17, Delft University of Technology, 1998, http://homepage.tudelft.nl/11r49/documents/as98.pdf.
[23] P. Koosis, The Logarithmic Integral I, Cambridge University Press, Cambridge, 1988.
[24] D. Lubinsky, A survey of weighted polynomial approximation with exponential weights, Surveys in Approximation Theory (2007), 1–105.
[25] D. Lubinsky, Weighted Markov-Bernstein inequalities for entire functions of exponential type, Publ. Inst. Math. (Beograd) (N.S.)
96 (110) (2014), 181–192.
[26] A. A. Markov, On a question of D. I. Mendeleev, Zapiski Petersb. Akad. Nauk (1889), 1–24 (in Russian).
[27] G. Mastroianni and V. Totik, Weighted polynomial inequalities with doubling and $A_\infty$ weights, Constr. Approx. (2000), 37–71.
[28] G. V. Milovanović, D. S. Mitrinović, Th. M. Rassias, Topics in Polynomials: Extremal Problems, Inequalities, Zeros, World Scientific, Singapore, 1994.
[29] P. Nevai and The Anonymous Referee, The Bernstein inequality and the Schur inequalities are equivalent, J. Approx. Theory (2014), 103–109.
[30] G. Nikolov, Markov-type inequalities in the $L^2$-norms induced by the Tchebycheff weights, Arch. Ineq. Appl. (2003), 361–376.
[31] G. Nikolov, A. Shadrin, Markov $L^2$-inequality with the Laguerre weight, in: Constructive Theory of Functions, Sozopol 2018 (K. Ivanov, G. Nikolov, and R. Uluchev, Eds.), Professor Marin Drinov Academic Publishing House, Sofia, 2018, pp. 207–221.
[32] G. Nikolov, A. Shadrin, On the Markov inequality in the $L^2$-norm with the Gegenbauer weight, Constr. Approx. (1) (2019), 1–27.
[33] G. Nikolov, A. Shadrin, Markov-type inequalities and extreme zeros of orthogonal polynomials (submitted).
[34] Q. I. Rahman, G. Schmeisser, $L^p$ inequalities for entire functions of exponential type, Trans. Amer. Math. Soc. (1990), 91–103.
[35] Q. I. Rahman, G. Schmeisser, Analytic Theory of Polynomials, Clarendon Press, Oxford, 2002.
[36] M. Riesz, Formule d'interpolation pour la dérivée d'un polynôme trigonométrique, C. R. Acad. Sci. Paris 158 (1914), 1152–1154.
[37] M. Riesz, Sur le problème des moments et le théorème de Parseval correspondant,
Acta Litt. ac Sci. (Szeged) (1922–23), 209–225.
[38] E. Schmidt, Über die nebst ihren Ableitungen orthogonalen Polynomensysteme und das zugehörige Extremum, Math. Ann. (1944), 165–204.
[39] A. Shadrin, Twelve proofs of the Markov inequality, in: Approximation Theory: A volume dedicated to Borislav Bojanov (D. K. Dimitrov, G. Nikolov, and R. Uluchev, Eds.), Professor Marin Drinov Academic Publishing House, Sofia, 2004, pp. 233–298.
[40] G. Szegő, Orthogonal Polynomials, 4th ed., Amer. Math. Soc. Coll. Publ., Vol. 23, Providence, RI, 1975.
[41] P. Turán, Remark on a theorem of Ehrhard Schmidt,
Mathematica (Cluj) (1960), 373–378.
[42] E. A. van Doorn, Representations and bounds for the zeros of orthogonal polynomials and eigenvalues of sign-symmetric tri-diagonal matrices, J. Approx. Theory (1987), 254–266.
[43] H. S. Wall and M. Wetzel, Quadratic forms and convergence regions for continued fractions, Duke Math. J. (1944), 89–102.

Departamento de Matemática Aplicada, IBILCE, Universidade Estadual Paulista, 15054-000 São José do Rio Preto, SP, Brazil
E-mail address: d k [email protected]

Faculty of Mathematics and Informatics, Sofia University "St. Kliment Ohridski", 5 James Bourchier Blvd., 1164 Sofia, Bulgaria
E-mail address: