A note on the U,V method of estimation

IMS Lecture Notes–Monograph Series
Complex Datasets and Inverse Problems: Tomography, Networks and Beyond
Vol. 54 (2007) 172–176
© Institute of Mathematical Statistics, 2007

Arthur Cohen and Harold Sackrowitz

Rutgers University

Abstract: The U,V method of estimation provides unbiased estimators or predictors of random quantities. The method was introduced by Robbins [3] and subsequently studied in a series of papers by Robbins and Zhang. (See Zhang [5].) Practical applications of the method are featured in these papers. We demonstrate that for one U function (one for which there is an important application) the V estimator is inadmissible for a wide class of loss functions. For another important U function the V estimator is admissible for the squared error loss function.
1. Introduction
The U,V method of estimation was introduced by Robbins [3]. The method applies to estimating random quantities in an unbiased way, where unbiasedness is defined as follows: the expected value of the estimator equals the expected value of the random quantity to be estimated. More specifically, suppose X_j, j = 1, ..., n, are random variables whose density (or mass) function is denoted by f_{X_j}(x_j | θ_j). In this paper we consider estimands of the form

(1.1)    S(X, θ) = \sum_{j=1}^n U^*(X_j, θ_j),

where X = (X_1, ..., X_n)' and θ = (θ_1, ..., θ_n)'. An estimator V(X) is an unbiased estimator of S if

(1.2)    E_θ V(X) = E_θ S(X, θ).

Of particular interest in applications are estimands of the form U^*(X_j, θ_j) = U(X_j)θ_j, where U(·) is an indicator function. Robbins [3] offers a number of examples of unbiased estimators using the U,V method. Zhang [5] studies the U,V method for estimating S and provides conditions under which the "U,V" estimators are asymptotically efficient. Zhang [5] then presents a Poisson example that deals with a practical problem involving motor vehicle accidents.

In this note we demonstrate that for many practical applications the U,V estimators are inadmissible for many sensible loss functions. In particular, for the Poisson example given in Zhang [5], for the U function given, the V estimator is inadmissible for any reasonable loss function, since the estimator is positive for some X when S = 0 no matter which θ is true.

Previously, Sackrowitz and Samuel-Cahn [4] showed that the U,V estimator of the selected mean of two independent negative exponential distributions is inadmissible for squared error loss.

*Research supported by NSF Grant DMS-0457248 and NSA Grant H98230-06-1-007.
Department of Statistics and Biostatistics, Hill Center, Busch Campus, Piscataway, NJ 08854-8019, USA; e-mail: [email protected]; [email protected]
AMS 2000 subject classifications: primary 62C15; secondary 62F15.
Keywords and phrases: admissibility, unbiased estimators, asymptotic efficiency.
In the next section we examine examples in which S functions based on simple U functions are estimated by inadmissible V functions. For other simple U functions the resulting V estimators are admissible for squared error loss. These latter results are presented in Section 3.
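For concreteness, the unbiasedness property (1.2) can be checked by simulation. The sketch below is ours, not part of the original note (all function and variable names are our own); it treats the Poisson case with U(x) = 1 if x ≤ A, using Robbins' estimator V(X) = X·U(X − 1) recalled in Section 2.

```python
import math
import random

def U(x, A):
    # Indicator function of Section 2: U(x) = 1 if x <= A, else 0
    return 1.0 if x <= A else 0.0

def V(x, A):
    # Robbins' unbiased estimator of U(X) * theta in the Poisson case:
    # V(X) = X * U(X - 1), with V(0) = 0
    return x * U(x - 1, A) if x > 0 else 0.0

def poisson(lam, rng):
    # Knuth's multiplicative sampler; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def compare_means(theta, A, reps, seed=0):
    # Average V(X) against the average of the estimand S = U(X) * theta
    rng = random.Random(seed)
    ev = es = 0.0
    for _ in range(reps):
        x = poisson(theta, rng)
        ev += V(x, A)
        es += U(x, A) * theta
    return ev / reps, es / reps

mean_V, mean_S = compare_means(theta=2.0, A=3, reps=200_000)
print(mean_V, mean_S)  # the two sample means should nearly agree
```

The agreement of the two averages illustrates (1.2); the exact identity E[X U(X − 1)] = θ E[U(X)] follows from the Poisson recursion x·p(x; θ) = θ·p(x − 1; θ).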
2. Inadmissibility results
Let X_j, j = 1, ..., n, be independent random variables with density f_{X_j}(x_j | θ_j). Let U^*(X_j, θ_j) = U(X_j)θ_j, where, for some fixed A ≥ 0,

(2.1)    U(X_j) = 1 if X_j ≤ A,  and  U(X_j) = 0 if X_j > A.

Consider the following four distributions for X_j:

(2.2)  Poisson:        f_X(x | θ) = e^{-θ} θ^x / x!    (θ > 0, x = 0, 1, 2, ...),
(2.3)  Geometric:      f_X(x | θ) = (1 - θ) θ^x    (0 < θ < 1, x = 0, 1, 2, ...),
(2.4)  Exponential:    f_X(x | θ) = (1/θ) e^{-x/θ}    (θ > 0, x > 0),
(2.5)  Uniform scale:  f_X(x | θ) = 1/θ    (0 < x < θ, θ > 0).

Let W(t), t ≥ 0, satisfy W(0) = 0 and W(t) > 0 for t > 0. Consider loss functions

(2.6)    W(a, S) = W(|a - S|),

for action a. For the distributions in (2.2), (2.3), (2.4), and (2.5), Robbins [3] finds unique unbiased estimators V(X_j) of U(X_j)θ_j.

Theorem 2.1.
Let X_j, j = 1, ..., n, be independent random variables whose distribution is (2.2), (2.3), (2.4), or (2.5). Consider the loss function given in (2.6), and let U(X_j) be as in (2.1). Then the unbiased estimator V(X) = \sum_{j=1}^n V(X_j), where V(X_j) is the unbiased estimator of U(X_j)θ_j, is inadmissible for S given in (1.1).

Proof. The idea of the proof is easily seen if n = 1; the case of arbitrary n follows similarly. The proof for n = 1 goes as follows: let X be X_1 and θ be θ_1. The V(X) estimators for the four cases are given in Robbins [3].

For the Poisson case, V(X) = U(X - 1)X (with V(0) = 0). Let [A] denote the largest integer less than or equal to A. Then V([A] + 1) = [A] + 1, whereas S = U([A] + 1)θ = 0. If

V*(X) = V(X) for all X except X = [A] + 1,  and  V*([A] + 1) = 0,

then clearly V*(X) is better than V(X), since W(|V*([A] + 1) - S|) = 0 for V* while W(|([A] + 1) - S|) > 0 for V. For the case of arbitrary n, S = 0 whenever all X_j ≥ [A] + 1, whereas V(X) ≠ 0 whenever at least one X_j = [A] + 1. If all X_j = [A] + 1, then V = n([A] + 1). Clearly, if V* = 0 at such X, V* is better than V.

For the geometric distribution when n = 1, V(X) = \sum_{i=0}^{X-1} U(i) (with V(0) = 0). Note that S = 0 for X ≥ [A] + 1, but V = [A] + 1 for all such X. Again, if V* = V for X ≤ [A] and V* = 0 for X ≥ [A] + 1, then V* is better than V. The case of arbitrary n is even more dramatic than in the Poisson case, with S = 0 if all X_j ≥ [A] + 1 whereas V ≠ 0 on such points.

For the exponential distribution when n = 1, V(X) = \int_0^X U(t) dt, which equals X if X ≤ A and A if X > A. For arbitrary n, S = 0 whenever all X_j > A, whereas V(X) ≠ 0 on such points.

For the scale parameter of a uniform distribution with n = 1, V(X) = XU(X) + \int_0^X U(t) dt, which becomes 2X if X ≤ A and A if X > A. Hence, as in the previous case, for arbitrary n, S = 0 whenever all X_j > A whereas V(X) ≠ 0 on such points. This completes the proof of the theorem.

Table 1. Improvement in risk for squared error loss, by n and A. (Table entries not preserved in this copy.)

Remark 2.1.
Theorem 2.1 applies to the Poisson example in Zhang [5].
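The domination in the Poisson case can also be verified exactly. The following sketch is ours (n = 1, squared error loss, illustrative integer A = 3, so [A] + 1 = 4); it computes the exact risks of V and of the improved V* from the proof, which differ only at X = [A] + 1.

```python
import math

def poisson_pmf(x, theta):
    return math.exp(-theta) * theta ** x / math.factorial(x)

A = 3  # an illustrative choice; [A] + 1 = 4

def V(x):
    # Robbins' estimator V(X) = X * U(X - 1): equals X for 1 <= X <= A + 1, else 0
    return float(x) if 0 < x <= A + 1 else 0.0

def V_star(x):
    # Improved estimator: agrees with V except that V*([A] + 1) = 0
    return 0.0 if x == A + 1 else V(x)

def risk(est, theta, x_max=100):
    # Exact risk E[(est(X) - S)^2] for n = 1, where S = U(X) * theta and
    # U(x) = 1{x <= A}; the Poisson sum is truncated at x_max
    return sum((est(x) - (theta if x <= A else 0.0)) ** 2 * poisson_pmf(x, theta)
               for x in range(x_max + 1))

for theta in (0.5, 1.0, 2.0, 4.0):
    print(theta, risk(V, theta), risk(V_star, theta))
```

The risk difference is exactly ([A] + 1)^2 · P(X = [A] + 1), which is positive for every θ > 0, so V* strictly dominates V.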
Remark 2.2.
If the loss function in (2.6) is squared error, then the amount of improvement in risk of V* over V depends on n, A, and θ, and it can easily be calculated. For the case where all the components of θ are equal, with each θ_i, i = 1, ..., n, set equal to [A] + 1, the amount of improvement equals

(2.7)    \sum_{i=1}^n \big( i([A] + 1) \big)^2 \binom{n}{i} \left( \frac{e^{-([A]+1)} ([A] + 1)^{[A]+1}}{([A] + 1)!} \right)^i \left( 1 - \sum_{y=0}^{[A]+1} \frac{e^{-([A]+1)} ([A] + 1)^y}{y!} \right)^{n-i}.

Table 1 offers the amount of improvement for n = 1(1)10 and for values of A = 1, 3, 5, 7, 9. We observe that as n gets large the amount of improvement becomes smaller. Also, for small n, as A gets large the improvement gets large. Such observations are consistent with the asymptotic efficiency of the U,V estimator as n → ∞ and with Stirling's formula.

Remark 2.3.
Theorem 2.1 also holds for predicting

S* = \sum_{j=1}^n Y_j U(X_j),

where Y_j has the same distribution as X_j but is unobserved.
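The improvement formula (2.7) is straightforward to evaluate numerically. The following sketch of ours (function names are our own) computes it over a grid of n and A similar to that of Table 1.

```python
from math import comb, exp, factorial, floor

def improvement(n, A):
    # Risk improvement (2.7) of V* over V under squared error loss,
    # Poisson case, with every theta_i equal to m = [A] + 1
    m = floor(A) + 1
    p = exp(-m) * m ** m / factorial(m)  # P(X = [A] + 1)
    q = 1.0 - sum(exp(-m) * m ** y / factorial(y) for y in range(m + 1))  # P(X >= [A] + 2)
    return sum((i * m) ** 2 * comb(n, i) * p ** i * q ** (n - i)
               for i in range(1, n + 1))

for n in range(1, 11):
    print(n, [round(improvement(n, A), 4) for A in (1, 3, 5, 7, 9)])
```

For n = 1 the sum reduces to ([A] + 1)^2 · P(X = [A] + 1), the exact risk difference of the single-observation case, and the improvement tends to 0 as n → ∞, consistent with the asymptotic efficiency noted above.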
3. Admissibility results
In this section we consider the case

(3.1)    U(X_j) = 0 if X_j ≤ A,  and  U(X_j) = 1 if X_j > A,    A ≥ 0, j = 1, ..., n.

We also consider a squared error loss function.
Theorem 3.1.
Suppose X_j are independent with Poisson distributions with parameter λ_j. Then V(X) is an admissible estimator of S(X, λ) for squared error loss.

Proof. Let n = 1 and recall V(X) = U(X - 1)X, with V(0) = 0. Then

V(X) = 0 for X = 0, 1, ..., [A] + 1,  and  V(X) = X for X > [A] + 1,

while

U^*(X, λ) = U(X)λ = 0 for X ≤ [A],  and  U^*(X, λ) = λ for X ≥ [A] + 1.

Since U^*(X, λ) = 0 for X ≤ [A], any admissible estimator of U^*(X, λ) must estimate 0 for X ≤ [A], as V(X) does. At this point we can restrict the class of estimators to those which estimate by the value 0 for all X ≤ [A]. For X ≥ [A] + 1, U^*(X, λ) = λ, and we have a traditional problem of estimating a parameter λ. Now we can refer to the proof of Lemma 5.2 of Brown and Farrell [1] to conclude that any estimator that is to beat V(X) would have to estimate 0 at X = [A] + 1. Furthermore, for the conditional problem given X > [A] + 1, it follows by results in Johnstone [2] that X is an admissible estimator of λ.

For arbitrary n the proof is more detailed. We give the details for n = 2; the extension to arbitrary n follows the steps for n = 2 and employs induction. For n = 2, suppose V(X_1) + V(X_2) is inadmissible. Then there exists δ*(X_1, X_2) such that

(3.2)    \sum_{x_1=0}^∞ \sum_{x_2=0}^∞ \big( V(x_1) + V(x_2) - U(x_1)λ_1 - U(x_2)λ_2 \big)^2 λ_1^{x_1} λ_2^{x_2} e^{-λ_1-λ_2} / (x_1! x_2!)
         ≥ \sum_{x_1=0}^∞ \sum_{x_2=0}^∞ \big( δ*(x_1, x_2) - U(x_1)λ_1 - U(x_2)λ_2 \big)^2 λ_1^{x_1} λ_2^{x_2} e^{-λ_1-λ_2} / (x_1! x_2!),

for all λ_1 > 0, λ_2 > 0, with strict inequality for some λ_1 and λ_2.

Now let λ_2 → 0. Then, by continuity of the risk function, (3.2) leads to

(3.3)    E\big\{ \big( V(X_1) - U(X_1)λ_1 \big)^2 \big\} ≥ E\big\{ \big( δ*(X_1, 0) - U(X_1)λ_1 \big)^2 \big\}.

Since V(X_1) is admissible for U(X_1)λ_1 (the case n = 1), (3.3) implies that V(X_1) = δ*(X_1, 0). Reconsider (3.2), but now let the sum on x_2 run from 1 to ∞, which is possible since V(X_1) = δ*(X_1, 0). Letting λ_2 → 0 once more, and repeating the argument, yields V(X_1) = δ*(X_1, X_2) for X_2 = 0, 1, ..., [A] + 1. Furthermore, by symmetry, V(X_2) = δ*(0, X_2) = ··· = δ*([A] + 1, X_2). Thus V(X_1) + V(X_2) = δ*(X_1, X_2) on all sample points except those in the set B = {X_1 ≥ [A] + 2, X_2 ≥ [A] + 2}. On B, V(X_1) + V(X_2) = X_1 + X_2 and S = λ_1 + λ_2. We consider the conditional problem of estimating λ_1 + λ_2 by X_1 + X_2 given X ∈ B. Clearly, when λ_1 = λ_2 = λ, no estimator can match, much less beat, the risk of X_1 + X_2 for this conditional problem, since X_1 + X_2 is a sufficient statistic, the loss is squared error, and X_1 + X_2 is an admissible estimator of 2λ. Thus δ*(X_1, X_2) = V(X_1) + V(X_2) on the entire sample space, proving the theorem.
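As a numerical complement to Theorem 3.1 (a sketch of ours, not from the paper), one can confirm that with the U of (3.1), i.e. U(x) = 1 for x > A, the estimator V(X) = X·U(X − 1) used in the proof is exactly unbiased for U(X)λ.

```python
import math

def pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

def unbiasedness_gap(lam, A, x_max=100):
    # Section 3's U: U(x) = 1{x > A}, so V(X) = X * U(X - 1) equals 0 for
    # X <= [A] + 1 and X otherwise.  Returns E[V(X)] - E[U(X) * lam],
    # with Poisson sums truncated at x_max.
    EV = sum(x * (1.0 if x - 1 > A else 0.0) * pmf(x, lam) for x in range(x_max + 1))
    ES = sum(lam * (1.0 if x > A else 0.0) * pmf(x, lam) for x in range(x_max + 1))
    return EV - ES

for lam in (0.5, 2.0, 5.0):
    print(lam, unbiasedness_gap(lam, A=3))  # each gap is numerically zero
```

The vanishing gaps reflect the identity E[X U(X − 1)] = λ E[U(X)], which holds for any U by the Poisson recursion used throughout the paper.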
References

[1] Brown, L. D. and Farrell, R. H. (1985). Complete class theorems for estimation of multivariate Poisson means and related problems. Ann. Statist.
[2] Johnstone, I. (1984). Admissibility, difference equations and recurrence in estimating a Poisson mean. Ann. Statist.
[3] Robbins, H. (1988). The u,v method of estimation. In Statistical Decision Theory and Related Topics IV (S. S. Gupta and J. O. Berger, eds.) 265–270. Springer, New York. MR0927106
[4] Sackrowitz, H. B. and Samuel-Cahn, E. (1984). Estimation of the mean of a selected negative exponential population. J. R. Statist. Soc.
[5] Zhang, C. (2005). Estimation of sums of random variables: examples and information bounds. Ann. Statist. 33.