Non-Parametric Field Estimation using Randomly Deployed, Noisy, Binary Sensors
Ye Wang and Prakash Ishwar
Department of Electrical and Computer Engineering
Boston University, Boston, MA
{yw,pi}@bu.edu

Abstract—The reconstruction of a deterministic data field from binary-quantized noisy observations of sensors which are randomly deployed over the field domain is studied. The study focuses on the extremes of lack of deterministic control in the sensor deployment, lack of knowledge of the noise distribution, and lack of sensing precision and reliability. Such adverse conditions are motivated by possible real-world scenarios where a large collection of low-cost, crudely manufactured sensors are mass-deployed in an environment where little can be assumed about the ambient noise. A simple estimator that reconstructs the entire data field from these unreliable, binary-quantized, noisy observations is proposed. Technical conditions for the almost sure and integrated mean squared error (MSE) convergence of the estimate to the data field, as the number of sensors tends to infinity, are derived and their implications are discussed. For finite-dimensional, bounded-variation, and Sobolev-differentiable function classes, specific integrated MSE decay rates are derived. For the first and third function classes these rates are found to be minimax order optimal with respect to infinite precision sensing and known noise distribution.
Keywords: nonparametric regression; Monte Carlo sampling; dithered scalar quantization; minimax rate of convergence; almost sure convergence; oversampled analog-to-digital conversion; distributed source coding; sensor networks; scaling law

I. INTRODUCTION
In a recent paper [1] we considered the problem of reconstructing a bounded deterministic multidimensional data field f : [0,1]^p → [−a, +a], 0 < a < ∞, from noisy dithered binary-quantized observations collected by n sensors randomly deployed over the field domain. The random sensor deployment model was based on uniform Monte Carlo sampling, where the n sensor locations are independently and identically distributed (iid) uniformly over the field domain [0,1]^p. A simple estimator that reconstructs the entire data field from these unreliable, binary-quantized, noisy observations was proposed in [1] and an upper bound on the integrated MSE of the estimator was derived. Using this bound, the integrated MSE convergence of the estimator to the actual field as the number of sensors n → ∞ was established.

In the present paper we expand and complete the development of results in [1]: (i) In Section III-B we extend the results of [1] to general deployment distributions. We establish a general upper bound on the integrated MSE which highlights the interaction of the deployment distribution and the orthonormal basis used for non-parametric field estimation (Theorem 3.1). (ii) We then derive sufficient conditions on the deployment distribution, the orthonormal basis, and the dimension of the field estimate which ensure the asymptotic (as n → ∞) integrated MSE consistency of the proposed estimator. Implications for desirable deployment distributions are also discussed.

This material is based upon work supported by the US National Science Foundation (NSF) under award (CAREER) CCF-0546598. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. A part of this work was presented at the 2007 International Symposium on Information Theory (ISIT).

The field domain [0,1]^p is used for clarity and ease of exposition. However, the results can be generalized to compact subsets of R^p.
(iii) In Section III-C we comprehensively investigate the asymptotic (as n → ∞) almost sure consistency of the proposed estimator. The highlight of this section is Theorem 3.2, which provides an interesting set of sufficient conditions on the deployment distribution, the orthonormal basis, and the dimension of the field estimate which ensure asymptotic almost sure convergence of the estimation error to zero. The implications of Theorem 3.2 are explored in detail through Proposition 3.1 and Corollary 3.2 and are of independent interest.

For the finite-dimensional, bounded-variation, and Sobolev-differentiable function classes, explicit achievable decay rates for the integrated MSEs are provided in Section IV. Specifically, for fields that belong to a finite-dimensional function space, the integrated MSE decays as O(1/n) (Corollary 4.1). For fields of bounded variation, the integrated MSE decays as O(1/√n) (Corollary 4.2). For fields that are s-Sobolev smooth (see Section IV-C), the integrated MSE decays as O(n^{−2s/(2s+1)}) (Corollary 4.3).

One of the highlights of this work is that for multidimensional fields living in rich function spaces, the minimax rate of convergence of the integrated MSE, even with randomly deployed sensors, unknown noise statistics, and binary dithered scalar quantization (a highly nonlinear operation), can match the minimax rate of convergence with infinite-precision real-valued samples and known noise statistics.

The application context of this work is distributed sensing and coding for field reconstruction in wireless sensor networks as in [1]. The focus is on the extremes of lack of control in the sensor deployment, arbitrariness and lack of knowledge of the noise distribution, and low precision and unreliability in the sensors.

Landau's asymptotic notation: f(n) = O(g(n)) ⇔ limsup_{n→∞} |f(n)/g(n)| < ∞; f(n) = Ω(g(n)) ⇔ g(n) = O(f(n)); f(n) = Θ(g(n)) ⇔ f(n) = O(g(n)) and g(n) = O(f(n)).
These adverse conditions are motivated by possible real-world scenarios where a large collection of low-cost, crudely manufactured sensors are mass-deployed in an environment where little can be assumed about the ambient noise. Each sensor measures a noisy sample of the field at its location under iid zero-mean, bounded-amplitude, additive noise. The statistical distribution of the noise is unknown to the sensors and the fusion center, and the results in this paper hold for arbitrary distributions satisfying these assumptions. Each noisy sensor sample is quantized to a binary value by comparison with a random threshold (1-bit dithered scalar quantization). The binary quantization models the extreme of low-precision quantization. The random thresholds are assumed to be iid across the sensors and uniformly distributed over the sample dynamic range, modeling the extreme unreliability in the quantization across sensors due to manufacturing process variations and environmental conditions at different sensor locations. Such extreme modeling assumptions are considered to demonstrate what is still achievable under adverse conditions.

The communication channel issues are abstracted away by assuming that the underlying sensor communication network is able to handle the modest payload of transmitting one bit (the binary-quantized observation) per sensor to the fusion center. The focus of this work is on reconstructing a single time snapshot of the field at a fusion center. The reconstruction of multiple time snapshots of the field can also be accommodated within the framework of this work as in [2] but is omitted for clarity. In fact, this can be achieved with time-sharing sensors, vanishing per-sensor rate, and vanishing sensor location "overheads" (see [2]). It is also assumed that the fusion center has access to the physical locations of the sensors and can correctly associate messages with their points of origin.
This may be justifiable by possible models for the underlying wireless transmission where triangulation of sensors is inherently performed. The problem setup is illustrated in Figure 1.

The available literature on distributed field estimation which simultaneously treats binary sensing, random sensor deployment, and unknown observation noise distribution is limited. The early works in [3], [4] consider the problem of reconstructing a signal from binary-quantized samples acquired with random thresholds, but do not consider arbitrary additive noise with an unknown distribution and only consider fixed deterministic sampling locations (deployment). The work in [5] is limited to the estimation of a constant field and does not explicitly address sampling precision (sensing) constraints. A recent work [2] provides pointwise MSE decay rates in terms of the local and global modulus of field continuity by building upon the techniques in [3], [4], [5]. However, [2] does not consider random sensor deployment and requires local field continuity for pointwise MSE convergence. The present work incorporates random sensor deployment, binary sensing, and unknown noise distribution while studying almost sure and integrated MSE convergence of the field estimate. The integrated MSE convergence for the bounded-variation, Sobolev-differentiable, and finite-dimensional function classes is explored in detail. Our results expose the effects of field "smoothness", deployment randomness, and observation/sensing noise on the integrated MSE scaling behavior. (Network overheads refer to additional bits of information that must be attached to each message to identify the point of origin of the message.)

For field estimation approaches which are not constrained by finite sensing precision and sensing unreliability, such as those involving "uncoded" analog joint sampling-transmission, there is a growing body of literature now available (e.g., see [6], [7], [8], [9], [10], [11], [12] and references therein).
Related to the distributed field reconstruction problem is the so-called CEO problem studied in the Information Theory community, in which the distortion is averaged over multiple field snapshots over time (e.g., see [13], [14], [15] and references therein). There is also a significant body of work on oversampled A-D conversion (e.g., see [16] and references therein), which is loosely related to the results of the present work concerning finite-dimensional fields. However, these are different problem formulations and are not the focus of the present work.

The rest of this paper is organized as follows. The problem formulation with detailed modeling assumptions is presented in Section II. The core technical results are then summarized and discussed in Section III. The core results are then used to derive explicit expressions for the decay rate of the integrated MSE for three rich function classes in Section IV. The proofs of all the core technical results are presented in Section V, and concluding remarks are made in Section VI.

II. PROBLEM FORMULATION
Field Model:
We model the field as a real-valued, bounded, deterministic function f : D → [−a, +a] belonging to a non-parametric function class F, that is, f ∈ F, where F is a set of measurable functions mapping D to [−a, +a]. The domain of the field D is assumed to be a compact subset of R^d, the d-dimensional Euclidean space. The objective is to reconstruct this function with high fidelity from binary-quantized noisy observations collected by a network of non-cooperative sensors that are randomly deployed over the domain D.

Random Sensor Deployment:
We assume that the n sensors are independently and identically randomly deployed over the domain D according to a known distribution p_X. If X_i ∈ D denotes the location of the i-th sensor for i ∈ {1, ..., n}, then X_i ∼ iid p_X captures the lack of control in sensor deployment. We assume that the support of p_X is D and that p_X is a non-singular distribution.

Additive Noise:
Each sensor takes a sample of the field under additive noise. The noisy samples are given by Y_i = f(X_i) + Z_i, for i ∈ {1, ..., n}, where the noise variables Z_i ∼ iid p_Z and are independent of the sensor locations. We assume that each Z_i is zero-mean and is bounded in amplitude by a constant b > 0, that is, the support of p_Z is contained in [−b, +b]. However, besides these assumed conditions, the distribution p_Z is unknown to both the sensors and the fusion center, and the results and methods of this paper hold for arbitrary noise distributions satisfying these conditions. We let P_Z denote the set of all noise distributions satisfying these assumptions. Note that since both the field and the noise are bounded, the noisy samples are bounded: |Y_i| ≤ c := a + b. We assume that the value of c is known. The values of a and b can remain unknown to the sensors and the fusion center.

The number of parameters that specify a non-parametric function class is not fixed a priori and is possibly infinite. The sensors do not exchange information or otherwise collaborate at the time of or before taking measurements. A random variable with a non-singular distribution takes values in a subset of D with Lebesgue measure 0 with probability 0.

Fig. 1. Problem Setup: The field (in a single snapshot) is sampled by n sensors at their respective locations under additive noise. Each sample is unreliably quantized to a binary value by a comparison with a random threshold. These binary values are transmitted to the fusion center which reconstructs the field.

Unreliable, Binary Quantization:
We assume that in the sensor hardware frontend, the noisy sample is quantized by an unreliable, low-precision analog-to-digital converter. Specifically, we consider one-bit (binary), dithered, scalar quantization implemented as a comparison to a random threshold that is uniformly distributed over the sample dynamic range [−c, +c]. The binary-quantized observations are given by, for i ∈ {1, ..., n},

B_i = sgn(Y_i − T_i) = { +1, Y_i > T_i; −1, Y_i ≤ T_i } = { +1, f(X_i) + Z_i > T_i; −1, f(X_i) + Z_i ≤ T_i },

where T_i ∼ iid Unif[−c, +c] are the uniform random thresholds. The thresholds are independent of the sensor locations and the noise. The value of B_i is finally the observation that sensor i has access to.
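The key property behind this sensing model is that dithering makes the single bit conditionally unbiased: with T ∼ Unif[−c, +c] and |Y| ≤ c, P(Y > T) = (Y + c)/(2c), so E[B | Y] = Y/c. A minimal Monte Carlo sketch of this fact (the numeric values below are illustrative, not from the paper):

```python
import numpy as np

# With T ~ Unif[-c, +c] and |Y| <= c, P(Y > T) = (Y + c)/(2c), so
# E[B | Y] = (+1) P(Y > T) + (-1) P(Y <= T) = Y / c.
rng = np.random.default_rng(0)
c = 1.5                                  # known dynamic range c = a + b
y = 0.7                                  # one fixed noisy sample, |y| <= c
T = rng.uniform(-c, c, size=1_000_000)   # iid dither thresholds
B = np.where(y > T, 1.0, -1.0)           # B = sgn(y - T)
print(c * B.mean(), y)                   # c * (sample mean of B) tracks y
```

This is why averaging many 1-bit observations, each taken with an independent threshold, can recover real-valued information that a fixed deterministic threshold would destroy.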
Transmission: We abstract away communication channel issues and assume that the underlying communication network of the sensors is able to handle the modest payload of transmitting one bit per sensor to the fusion center. We also assume that the fusion center has access to the physical locations of the sensors and can correctly associate messages with their points of origin. Thus, we assume that through this abstracted communications network, the sensor location and quantized observation pairs {(X_i, B_i)}_{i=1}^n are reliably made available to the fusion center. The reconstruction of multiple time snapshots of the field with time-sharing sensors, vanishing per-sensor rate, and vanishing sensor location "overheads" can also be accommodated within the framework of this work as in [2] but is omitted for clarity.

Reconstruction and Distortion Criterion:
Given {(X_i, B_i)}_{i=1}^n, the fusion center constructs the field estimate f̂_{X_1,...,X_n,B_1,...,B_n} : D → C. For notational convenience, the explicit dependence on {(X_i, B_i)}_{i=1}^n will be suppressed and the estimator will simply be denoted by f̂_n. The performance criterion is the integrated MSE given by

D(f, f̂_n) := E[ ||f − f̂_n||² ] = E[ ∫_D |f(x) − f̂_n(x)|² dx ],

where the expectation is taken with respect to the random noise, thresholds, and sensor locations. The objective is to design an estimator f̂_n that minimizes the integrated MSE D. The problem setup is shown in Figure 1.

Minimax Integrated MSE:
For a given field subclass F_sub ⊂ F, of interest are the corresponding upper, lower, and minimax rates of convergence of the integrated MSE. A positive sequence γ_n is an upper rate of convergence if there exists a constant C < ∞ and an estimator f̂*_n such that

limsup_{n→∞} sup_{p_Z ∈ P_Z} sup_{f ∈ F_sub} γ_n^{−1} D(f, f̂*_n) ≤ C.

A positive sequence γ_n is a lower rate of convergence if there exists a constant C > 0 such that

liminf_{n→∞} inf_{f̂_n} sup_{p_Z ∈ P_Z} sup_{f ∈ F_sub} γ_n^{−1} D(f, f̂_n) ≥ C,

where inf_{f̂_n} denotes the infimum over all field estimators. The upper rate represents the asymptotic worst-case performance achieved by a given estimator. The lower rate represents a fundamental limit on the asymptotic performance of any estimator. A positive sequence γ_n that is both a lower rate and an upper rate of convergence is called the minimax rate of convergence, and the corresponding estimator f̂*_n that achieves the upper rate is called a minimax order optimal estimator. Note that showing D(f, f̂*_n) = O(γ_n) for all f ∈ F_sub and p_Z ∈ P_Z for a particular estimator f̂*_n is equivalent to showing that f̂*_n achieves γ_n as an upper rate of convergence of the integrated MSE. If it can be further shown that D(f, f̂_n) = Ω(γ_n) for a particular f ∈ F_sub, a particular p_Z ∈ P_Z, and for all estimators f̂_n, then γ_n is the minimax rate of convergence of the integrated MSE.

III. MAIN RESULTS
In this section, we describe our proposed field estimator and analyze its performance. We show that under suitable technical conditions, the field estimate is asymptotically integrated MSE consistent, that is, as n → ∞, E[||f − f̂_n||²] → 0. We also show that under suitable technical conditions, the field estimate is asymptotically almost sure consistent, that is, as n → ∞, almost surely f̂_n → f pointwise almost everywhere on D. We also provide an upper bound on the integrated MSE which is used in Section IV to derive achievable integrated MSE decay rates for specific function classes. The proofs of all theorems are presented in Section V.

Let F denote the set of all bounded, measurable functions f : D → [−a, +a]. Note that F ⊆ L²(D). Let B = {φ_j}_{j=0}^∞, with φ_j : D → C, denote an indexed orthonormal (Schauder) basis (e.g., Fourier, wavelet, etc.) of L²(D). Any f ∈ F can be decomposed (with equality in the L² sense) as

f = Σ_{j=0}^∞ ⟨f, φ_j⟩ φ_j =: Σ_{j=0}^∞ α_j φ_j,   (1)

where α_j := ⟨f, φ_j⟩ denotes the coefficients (projections onto the basis functions) of the expansion. The m-term approximation of f with respect to an orthonormal basis B = {φ_j}_{j=0}^∞ is given by

f_m := Σ_{j=0}^{m−1} ⟨f, φ_j⟩ φ_j.   (2)

The corresponding m-term approximation error is given by

ε[f, m, B] := ||f − f_m||² = Σ_{j=m}^∞ |⟨f, φ_j⟩|² = Σ_{j=m}^∞ |α_j|²,   (3)

which is a non-negative, non-increasing sequence in m that converges to zero for all f ∈ F [17, Chapter 9].
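The approximation error (3) is easy to evaluate for a concrete basis and field. A minimal sketch using the complex Fourier basis on [0, 1] and a square-wave test field (both illustrative choices; the square wave has bounded variation, anticipating Section IV-B), with the tail energy computed through Parseval's relation:

```python
import numpy as np

# eps[f, m, B] = ||f||^2 - sum_{j<m} |alpha_j|^2 by Parseval; the grid below
# serves as a simple midpoint quadrature for the inner products.
x = (np.arange(2**14) + 0.5) / 2**14          # quadrature grid on [0, 1]
f = np.where(x < 0.5, 1.0, -1.0)              # square wave: bounded variation

def phi(j, x):
    # Fourier basis on [0,1]: j even -> e^{+i pi j x}, j odd -> e^{-i pi (j+1) x}
    return np.exp(1j * np.pi * j * x) if j % 2 == 0 else np.exp(-1j * np.pi * (j + 1) * x)

energy = np.mean(np.abs(f) ** 2)              # ||f||^2 = 1 for this field
alpha = np.array([np.mean(f * np.conj(phi(j, x))) for j in range(400)])
eps = energy - np.cumsum(np.abs(alpha) ** 2)  # eps[m-1] = m-term approx. error
print(eps[9], eps[99])  # tail energy after 10 vs. 100 terms: roughly 10x apart
```

The printed pair illustrates the O(1/m) decay of ε[f, m, B_Fourier] for bounded-variation fields that Section IV-B uses.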
A. Proposed estimator

Our proposed estimator first estimates the first m coefficients {α_j}_{j=0}^{m−1} of (1) with respect to a given orthonormal basis B, according to

α̂_j := (c/n) Σ_{i=1}^n [φ*_j(X_i) / p_X(X_i)] B_i,   (4)

for j ∈ {0, ..., m−1}. A general tunable field estimate is given by the m-term approximation

f̂_{n,m} := Σ_{j=0}^{m−1} α̂_j φ_j,   (5)

where m is the tunable design parameter which can be chosen to depend on n to optimize the rate of decay of the integrated MSE for specific function classes. The final field estimate is given by specifying m as a function of n:

f̂_n := Σ_{j=0}^{m(n)−1} α̂_j φ_j.   (6)

The specification of m(n) for specific function classes is discussed in Section IV. The dependence of m on n needs to satisfy certain conditions to ensure that the estimate is asymptotically consistent. These conditions are described in Section III-B and Section III-C.
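The estimator (4)-(6) can be simulated end to end. A minimal sketch on D = [0, 1] with uniform deployment (p_X ≡ 1), the complex Fourier basis, and an illustrative smooth field; the field, noise level, and sizes below are assumptions for the demo, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(j, x):
    # Fourier basis on [0,1]: j even -> e^{+i pi j x}, j odd -> e^{-i pi (j+1) x}
    return np.exp(1j * np.pi * j * x) if j % 2 == 0 else np.exp(-1j * np.pi * (j + 1) * x)

a, b = 1.0, 0.5                              # field and noise amplitude bounds
c = a + b                                    # known dynamic range
f = lambda x: np.cos(2 * np.pi * x)          # bounded field, |f| <= a
n, m = 200_000, 9                            # number of sensors, truncation point

X = rng.uniform(0.0, 1.0, n)                 # random deployment, p_X(x) = 1
Y = f(X) + rng.uniform(-b, b, n)             # bounded zero-mean additive noise
T = rng.uniform(-c, c, n)                    # iid dither thresholds
B = np.where(Y > T, 1.0, -1.0)               # 1-bit observations

# Coefficient estimates (4), with p_X(X_i) = 1 in the denominator:
alpha_hat = np.array([(c / n) * np.sum(np.conj(phi(j, X)) * B) for j in range(m)])

# Field estimate (5) on a grid, and its empirical integrated squared error:
xs = np.linspace(0.0, 1.0, 512, endpoint=False)
f_hat = sum(alpha_hat[j] * phi(j, xs) for j in range(m)).real
mse = np.mean((f(xs) - f_hat) ** 2)
print(mse)  # small: the 1-bit observations suffice to reconstruct the field
```

For this field, the only nonzero coefficients are α_1 = α_2 = 1/2, and the estimates concentrate around these values at the Monte Carlo rate.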
B. Integrated MSE upper bounds and convergence results

The following theorem, whose proof appears in Section V-A, upper bounds the integrated MSE as the sum of two terms. The first term is due to the variance of the coefficient estimates. The second term is due to the bias caused by the finite-term series approximation.
Theorem 3.1: (Integrated MSE Upper Bound)
Let F, P_Z, and p_X be as given in Section II. Let f̂_{n,m} be given by (4) and (5), where B = {φ_j}_{j=0}^∞ is any orthonormal Schauder basis of L²(D). Then, ∀f ∈ F and ∀p_Z ∈ P_Z, the integrated MSE is upper bounded by

D = E[||f − f̂_{n,m}||²] ≤ (c²/n) Σ_{j=0}^{m−1} ∫_D [|φ_j(x)|² / p_X(x)] dx + ε[f, m, B],   (7)

where ε[f, m, B], given by (3), is a non-negative, non-increasing sequence that converges to 0 as m → ∞.

In light of Theorem 3.1, we now examine conditions on m(n), B, and p_X which ensure that the estimator is asymptotically consistent in the integrated MSE sense, that is, D → 0 as n → ∞. The following corollary specifies conditions that immediately ensure that the integrated MSE converges to 0.

Corollary 3.1: (Integrated MSE Convergence of the Field Estimate)
Under the same setup as Theorem 3.1, if m(n), B = {φ_j}_{j=0}^∞, and p_X satisfy

m(n) → ∞, as n → ∞,   (8)

(1/n) Σ_{j=0}^{m(n)−1} ∫_D [|φ_j(x)|² / p_X(x)] dx → 0, as n → ∞,   (9)

then the estimate converges in the integrated MSE sense to the field, that is, ∀f ∈ F and ∀p_Z ∈ P_Z, D → 0 as n → ∞.

Condition (8) is sufficient (and often necessary) to ensure that ε[f, m, B] converges to 0. Condition (9) is equivalent to the first term of the integrated MSE upper bound, given in (7), converging to 0. For some deployment distributions p_X, condition (9) may not be attainable for many orthonormal bases. For example, let the domain be D = [0, 1] with the deployment distribution p_X(x) = 2x over [0, 1]. Then for any orthonormal basis in which φ_0(x) = 1 over [0, 1], e.g., Fourier, Haar wavelets, Legendre polynomials, etc., the first term of the summation in (9) is given by

∫_D [|φ_0(x)|² / p_X(x)] dx = ∫_0^1 [1/(2x)] dx = ∞.

Thus the integrated MSE upper bound becomes useless. This implies that in general the deployment distribution and orthonormal basis have to be appropriately matched as a design consideration in order to satisfy condition (9). However, condition (9) is ensured for any orthonormal basis if the deployment distribution p_X has a strictly positive infimum over D, that is,

inf_{x ∈ D} p_X(x) = ν > 0.   (10)

Sensor deployment distributions over compact domains which are useful for high-resolution field reconstruction would satisfy such a condition. Given (10), we have that

Σ_{j=0}^{m−1} ∫_D [|φ_j(x)|² / p_X(x)] dx ≤ Σ_{j=0}^{m−1} ∫_D [|φ_j(x)|² / ν] dx = Σ_{j=0}^{m−1} (1/ν) ||φ_j||² = m/ν,

and, ∀f ∈ F and ∀p_Z ∈ P_Z, the corresponding integrated MSE upper bound becomes

D = E[||f − f̂_{n,m}||²] ≤ c²m/(nν) + ε[f, m, B].   (11)

The upper bound in (11) converges to 0 as n → ∞ when m(n) satisfies the following two conditions:

m(n) → ∞, as n → ∞,
m(n)/n → 0, as n → ∞.

C. Almost sure convergence results
In this subsection we establish sufficient conditions for the field estimate to be asymptotically almost sure consistent, that is, as n → ∞, almost surely f̂_n → f pointwise almost everywhere on D. First, we establish a key theorem that gives sufficient conditions for the convergence of the pointwise errors of the estimate with respect to the truncated approximation of the field. The proof of this theorem appears in Section V-B.

Theorem 3.2: (Almost Sure Convergence of Estimate Errors)
Let p_X be the deployment distribution described in Section II satisfying (10), B = {φ_j}_{j=0}^∞ be an orthonormal Schauder basis of L²(D), f̂_{n,m} be the field estimate given by (4) and (5), and f_m be the m-term approximation to the field given by (2). Let

S_{n,m}(x) := f̂_{n,m}(x) − f_m(x),

for all x ∈ D. If there exist a non-negative, increasing sequence of real numbers {Λ_m}_{m=1}^∞ and a non-negative, increasing sequence of positive integers {m(n)}_{n=1}^∞ which satisfy the following three conditions:

for almost every x, y ∈ D, | Σ_{j=0}^{m−1} φ_j(x) φ*_j(y) / p_X(y) | ≤ C₁ Λ_m,   (12)

∀f ∈ F and almost every x ∈ D, | Σ_{j=0}^{m−1} ⟨f, φ_j⟩ φ_j(x) | ≤ C₂ Λ_m,   (13)

∀ǫ > 0, Σ_{n=1}^∞ exp(−ǫ n / Λ²_{m(n)}) < ∞,   (14)

where C₁, C₂ > 0 are some constants, then ∀f ∈ F and ∀x ∈ D except on a set of Lebesgue measure zero, as n → ∞, almost surely,

S_n(x) := S_{n,m(n)}(x) → 0.

Conditions (12) and (13) impose constraints on the basis functions B = {φ_j}_{j=0}^∞ and the deployment distribution p_X. Condition (14) implies that as n → ∞, Λ²_{m(n)}/n → 0. This places a constraint on how fast m(n) can go to infinity. In particular, it requires that, in relation to Λ_m, m(n) not grow too fast with n.

We now examine some special choices of {Λ_m}_{m=1}^∞ for which conditions (12) and (13) will hold. For m ∈ {1, 2, ...}, define the auxiliary functions

g_m(x, y) := (1/Λ_m) Σ_{j=0}^{m−1} φ_j(x) φ*_j(y) / p_X(y),   (15)

h_m(x) := (1/Λ_m) Σ_{j=0}^{m−1} ⟨f, φ_j⟩ φ_j(x),   (16)

for x, y ∈ D. The following proposition, whose proof appears in Section V-C, gives two sets of conditions on {Λ_m}_{m=1}^∞, B = {φ_j}_{j=0}^∞, and p_X, for which conditions (12) and (13) will hold.

Proposition 3.1:
Let {Λ_m}_{m=1}^∞ be as in Theorem 3.2 and g_m, h_m be given by (15) and (16), respectively.

(i) If

√m / Λ_m → 0, as m → ∞,   (17)

and, for almost every x, y ∈ D, the limits

g_∞(x, y) := lim_{m→∞} g_m(x, y),   (18)

h_∞(x) := lim_{m→∞} h_m(x)   (19)

exist, then the limits are zero almost everywhere and conditions (12) and (13) are satisfied for some constants C₁, C₂ > 0.

(ii) If the basis functions are uniformly amplitude bounded, that is, ∀j ∈ {0, 1, ...} and ∀x ∈ D, |φ_j(x)| ≤ β < ∞, then conditions (12) and (13) are satisfied for Λ_m = m with constants C₁ = β²/ν and C₂ = aβ√(vol(D)).

Part (i) of Proposition 3.1 shows that if the limits of the auxiliary functions (15) and (16) as m → ∞ exist, then for any Λ_m such that (17) is satisfied, e.g., Λ_m = m^{γ/2} for any γ > 1, conditions (12) and (13) are satisfied for some constants. Part (ii) of Proposition 3.1 shows that if the basis functions are uniformly bounded, as, for example, in the orthonormal Fourier and Legendre bases, then conditions (12) and (13) are satisfied for Λ_m = m and the given constants.

We now examine conditions on {Λ_m}_{m=1}^∞ under which (14) will be satisfied. According to Ermakoff's series convergence test [18], if for some non-negative, non-increasing, real function q(t), t ≥ 1,

lim_{t→∞} e^t q(e^t) / q(t) < 1,

where e is the base of the natural logarithm, then

Σ_{n=1}^∞ q(n) < ∞.

Let q(t) = exp(−ǫ t / t^{2ψ}), t ≥ 1, where ψ ∈ (0, 1/2) and ǫ > 0. Then

e^t q(e^t) / q(t) = e^t exp(−ǫ e^t / e^{2tψ}) / exp(−ǫ t / t^{2ψ}) = exp( t − ǫ e^{(1−2ψ)t} + ǫ t^{1−2ψ} ) → 0, as t → ∞.

By Ermakoff's test, for all ψ ∈ (0, 1/2) and all ǫ > 0,

Σ_{n=1}^∞ exp(−ǫ n / n^{2ψ}) < ∞.
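This last convergence is easy to confirm numerically; a small sketch with illustrative ǫ and ψ (the partial sums stabilize well before the truncation point):

```python
import math

# Partial sums of sum_n exp(-eps * n / n^(2*psi)) = sum_n exp(-eps * n^(1-2*psi))
# for psi in (0, 1/2); eps and psi below are illustrative choices.
eps, psi = 0.5, 0.3
partial, checkpoints = 0.0, {}
for n in range(1, 100_001):
    partial += math.exp(-eps * n ** (1.0 - 2.0 * psi))
    if n in (50_000, 100_000):
        checkpoints[n] = partial
# The two checkpoints agree: the tail beyond n = 50000 is negligible.
print(checkpoints[100_000] - checkpoints[50_000])
```

The terms decay as exp(−ǫ n^{1−2ψ}), a stretched exponential, which is why the tail dies off so quickly once n is moderately large.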
Thus condition (14) will be satisfied if Λ_{m(n)} = n^ψ for any ψ ∈ (0, 1/2).

Combining the above result with Proposition 3.1 yields possible forms of the design parameters {m(n)}_{n=1}^∞ and {Λ_m}_{m=1}^∞ such that the conditions for almost sure convergence, (12), (13), and (14), are all simultaneously satisfied. Choosing m(n) = Θ(n^ψ), where ψ ∈ (0, 1), and Λ_m = m^{γ/2}, for some γ ∈ (1, 1/ψ), yields Λ_{m(n)} = n^{ψ'}, where ψ' = γψ/2 ∈ (ψ/2, 1/2), which satisfies (14) and (17) simultaneously. With these choices, Proposition 3.1 shows that conditions (12) and (13) will be satisfied as well if the limits (18) and (19) of the auxiliary functions (15) and (16), respectively, can be assumed to exist. Thus, for any m(n) of the form m(n) = Θ(n^ψ), where ψ ∈ (0, 1), we can choose {Λ_m}_{m=1}^∞ such that conditions (12), (13), and (14) are simultaneously satisfied, if the limits (18) and (19) exist.

Due to the properties of an orthonormal basis, as m → ∞, the m-term approximation f_m given by (2) converges in L²-norm to f for any f ∈ F. However, it is not guaranteed that for a general orthonormal basis f_m will converge pointwise almost everywhere to a specific function. If f_m does converge almost everywhere to some f_∞, then f_∞ must be equal to f almost everywhere. This can be seen by writing

0 ≤ ∫_D |f(x) − f_∞(x)|² dx = ∫_D liminf_{m→∞} |f(x) − f_m(x)|² dx ≤ liminf_{m→∞} ∫_D |f(x) − f_m(x)|² dx = 0,

where the inequality follows from Fatou's lemma [19]. Thus ∫_D |f(x) − f_∞(x)|² dx = 0, so |f(x) − f_∞(x)| = 0 for x ∈ D almost everywhere. For example, it is well known that for any f ∈ F ⊂ L²([0, 1]), the m-term Fourier series approximation f_m converges to f almost everywhere [20].

Corollary 3.2: (Almost Sure Convergence of the Field Estimate)
Within the context of Theorem 3.2, if conditions (12), (13), and (14) hold and if, as m → ∞, the m-term approximation f_m converges almost everywhere to some function f_∞, then for almost every x ∈ D, the pointwise error of the field estimate satisfies

|f̂_n(x) − f(x)| ≤ |f̂_n(x) − f_{m(n)}(x)| + |f_{m(n)}(x) − f(x)| = |S_n(x)| + |f_{m(n)}(x) − f(x)| → 0 almost surely, as n → ∞.

Thus, for x ∈ D almost everywhere, as n → ∞, almost surely f̂_n(x) → f(x).

IV. ACHIEVABLE INTEGRATED MSE DECAY RATES
In this section, we use the integrated MSE upper bound (7) to derive explicit expressions for the achievable upper rates of convergence of the integrated MSE for three specific function classes, namely, finite-dimensional F_{B_k}, bounded-variation F_BV, and s-Sobolev differentiable F_s. Throughout this section, we assume that (10) holds.

The general approach for deriving such rates of convergence for functions living in a function class F_sub ⊆ F is to select an appropriate basis B = {φ_j}_{j=0}^∞ in which the m-term approximation error ε[f, m, B] in (3) can be upper bounded by an explicit function of m for all f ∈ F_sub. Then m in (7) can be chosen to depend on n to optimize the convergence rate. Thus, given the appropriate function approximation theoretic results that upper bound ε[f, m, B], this approach establishes achievable upper rates of convergence of the integrated MSE for the corresponding function class.

A. Functions in a finite-dimensional subspace of F

The first function class represents the scenario where the fusion center has exact prior knowledge of the finite-dimensional space in which the function lives. Let F_{B_k} denote the subset of F that is composed of functions that are linear combinations of a given set of k orthonormal functions B_k = {φ_j}_{j=0}^{k−1}. Note that for any f ∈ F_{B_k}, f = Σ_{j=0}^{k−1} ⟨f, φ_j⟩ φ_j. Thus the function approximation at the truncation point m = k is exact, that is, f_m = f for m = k, so that, ∀f ∈ F_{B_k},

ε[f, k, B_k] = 0.   (20)

Combining (7) with (20) yields the following corollary.

Corollary 4.1: (Decay rate of integrated MSE for F_{B_k}) Let B_k and F_{B_k} be as given above and P_Z and p_X be as given in Section II. Let f̂_{n,m} be given by (4) and (5) with B_k as the basis. If p_X satisfies (10), then ∀f ∈ F_{B_k} and ∀p_Z ∈ P_Z, the integrated MSE of f̂_{n,m} with the truncation point m set to k is upper bounded as follows:

D = E[||f − f̂_{n,m}||²] ≤ c²k/(nν) = O(1/n).
Therefore, ∀f ∈ F_{B_k} and ∀p_Z ∈ P_Z, an achievable upper rate of convergence of the integrated MSE for fields in a finite-dimensional subspace is given by

D = E[||f − f̂_{n,m}||²] = O(1/n).

It should be noted that for this function class, the field estimation problem for integrated MSE is equivalent to a finite-dimensional parameter estimation problem with conditionally independent noisy observations. Under the choice of an appropriate, well-behaved noise distribution p_Z ∈ P_Z, the Cramér-Rao lower bound for the integrated MSE decay rate for finite-dimensional parameter estimation from iid noisy observations asymptotically behaves as D = Ω(1/n) for all asymptotically integrated MSE consistent estimators [21]. Hence the estimator is minimax order optimal for F_{B_k} and achieves the minimax rate of convergence γ_n = 1/n. (A noise distribution is chosen such that the observation model satisfies the Cramér-Rao regularity conditions [21].)
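The rate derivations for the richer function classes below all balance the two terms of the bound (11). A numeric sketch of this balancing, using the bounded-variation envelope ε ≤ σ/m from Corollary 4.2 (the constants c, ν, σ below are illustrative placeholders): equating the two terms gives m* = √(σνn)/c = Θ(√n), and the optimized bound 2c√(σ/(νn)) = Θ(1/√n).

```python
import numpy as np

# Bound (11) with the bounded-variation envelope eps <= sigma/m:
# g(m) = c^2 * m / (n * nu) + sigma / m. Illustrative constants.
c, nu, sigma = 1.5, 1.0, 1.0

def g(m, n):
    return c**2 * m / (n * nu) + sigma / m

scaled = []
for n in (10**4, 10**6, 10**8):
    m_star = np.sqrt(sigma * nu * n) / c       # equates the two terms of g
    scaled.append(np.sqrt(n) * g(m_star, n))   # sqrt(n) * optimized bound
print(scaled)  # constant across n, confirming the Theta(1/sqrt(n)) decay
```

The same balancing with ε ≤ σ/m^{2s} (the Sobolev envelope of Corollary 4.3) gives m* = Θ(n^{1/(2s+1)}) and an optimized bound of order n^{−2s/(2s+1)}.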
B. Functions of bounded variation on the domain D = [0, 1]

Let F_BV denote the subset of F which is composed of functions on D = [0, 1] of bounded variation. Formally,

F_BV := { f ∈ F | lim_{δ→0} ∫ [ |f(x) − f(x − δ)| / |δ| ] dx < +∞ }.

A function in F_BV has a derivative (at points for which it exists) which is uniformly bounded, and the sum of the amplitudes of its discontinuous jumps is finite. The bounded-variation condition represents a minimal "smoothness" assumption, since a restriction is placed only on the total amount of discontinuous jumps.

It is well known that for the Fourier basis,

B_Fourier = { φ_j(x) = e^{+iπjx} for j even, e^{−iπ(j+1)x} for j odd }_{j=0}^∞, where i = √−1,   (21)

the m-term approximation error (3) is upper bounded as follows: ∀f ∈ F_BV,

ε[f, m, B_Fourier] ≤ σ/m,   (22)

where σ > 0 is a constant [17, Chapter 9]. Combining (7) with (22) yields the following corollary.

Corollary 4.2: (Decay rate of integrated MSE for F_BV) Let F_BV be as given above and P_Z and p_X be as given in Section II. Let f̂_{n,m} be given by (4) and (5) with B_Fourier as given in (21). If p_X satisfies (10), then ∀f ∈ F_BV and ∀p_Z ∈ P_Z, the integrated MSE of f̂_{n,m} is upper bounded as follows:

D = E[||f − f̂_{n,m}||²] ≤ c²m/(nν) + σ/m,

where σ > 0 is a constant. Setting m(n) = √n to optimize the decay rate of the upper bound yields the following achievable upper rate of convergence of the integrated MSE, ∀f ∈ F_BV and ∀p_Z ∈ P_Z:

D = E[||f − f̂_{n,m}||²] = O(1/√n).

C. Sobolev differentiable functions on the domain D = [0, 1]

This function class includes functions which are differentiable in a generalized sense, to a degree of differentiability parameterized by s, which can take non-integer values. The value of s can be considered as a measure of smoothness.
For $s > 1/2$, let $\mathcal{F}_s$ denote the subset of $\mathcal{F}$ composed of functions on $\mathcal{D} = [0,1]$ that are $s$-times Sobolev differentiable. Formally,
$$\mathcal{F}_s := \left\{ f \in \mathcal{F} \,\middle|\, \int_{-\infty}^{+\infty} |\omega|^{2s}\, |\tilde{f}(\omega)|^2\, d\omega < +\infty \right\}, \qquad (23)$$
where $\tilde{f}(\omega)$ denotes the Fourier transform of $f$. Note that the condition in (23) (for integer values of $s$) corresponds to the $s$-th derivative of $f$ belonging to $L^2([0,1])$. Thus, this set includes functions that are $\lfloor s \rfloor$-times differentiable.

It is well known that for $s > 1/2$, $\forall f \in \mathcal{F}_s$,
$$\varepsilon^2[f, m, \mathcal{B}_{\mathrm{Fourier}}] \le \frac{\sigma^2}{m^{2s}}, \qquad (24)$$
where $\sigma > 0$ is a constant [17, Chapter 9]. Combining (7) with (24) yields the following corollary.

Corollary 4.3: (Decay rate of integrated MSE for $\mathcal{F}_s$) Let $\mathcal{F}_s$ be as given above and $\mathcal{P}_Z$ and $p_X$ be as given in Section II. Let $\hat{f}_{n,m}$ be given by (4) and (5) with $\mathcal{B}_{\mathrm{Fourier}}$ as given in (21). If $p_X$ satisfies (10), then $\forall f \in \mathcal{F}_s$ and $\forall p_Z \in \mathcal{P}_Z$, the integrated MSE of $\hat{f}_{n,m}$ is upper bounded as follows:
$$D = E\left[\|f - \hat{f}_{n,m}\|^2\right] \le \frac{c^2 m}{n \nu} + \frac{\sigma^2}{m^{2s}},$$
where $\sigma > 0$ is a constant. Setting $m(n) = n^{\frac{1}{2s+1}}$ to optimize the decay rate of the upper bound yields the following achievable upper rate of convergence of the integrated MSE, $\forall f \in \mathcal{F}_s$ and $\forall p_Z \in \mathcal{P}_Z$:
$$D = E\left[\|f - \hat{f}_{n,m}\|^2\right] = O\left(n^{-\frac{2s}{2s+1}}\right).$$
It is well known that the exact minimax rate of convergence of the integrated MSE for non-parametric regression in an $s$-Sobolev space, based on full-resolution, real-valued, noisy observations, is given by $\gamma_n = n^{-\frac{2s}{2s+1}}$ [22], [23]. In non-parametric regression, the field estimate is based directly on the full-resolution, real-valued, noisy observations $\{Y_i\}_{i=1}^n$, whereas in our problem the field estimate is based only on the binary-quantized observations $\{B_i\}_{i=1}^n$. In both setups, the corresponding sensor locations are known. It is therefore interesting to observe that our proposed estimator is minimax order optimal even with respect to the case in which the observations have not been quantized.

V. PROOFS
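As a numerical aside before the proofs: the bias-variance tradeoff behind Corollaries 4.2 and 4.3 can be checked directly by minimizing the upper bound $c^2 m/(n\nu) + \sigma^2 m^{-2s}$ over integer $m$, which recovers $m^* \asymp n^{1/(2s+1)}$ and the $n^{-2s/(2s+1)}$ rate. A short sketch (all constants hypothetical, here $c^2 = \sigma^2 = \nu = 1$ and $s = 2$; only the scaling matters):

```python
import numpy as np

# Upper bound from Corollary 4.3: D(m) <= c2*m/(n*nu) + s2*m**(-2s).
c2, s2, nu, s = 1.0, 1.0, 1.0, 2.0   # hypothetical constants and smoothness

def bound(m, n):
    """Variance term (grows with m) plus approximation term (shrinks with m)."""
    return c2 * m / (n * nu) + s2 * m**(-2.0 * s)

for n in [10**4, 10**6, 10**8]:
    ms = np.arange(1, 2000)
    m_star = ms[np.argmin(bound(ms, n))]   # numerically optimal model order
    # m_star tracks n^{1/(2s+1)} up to a constant factor, and the achieved
    # bound, rescaled by n^{2s/(2s+1)}, stays flat across n:
    print(n, m_star, n**(1 / (2*s + 1)), bound(m_star, n) * n**(2*s / (2*s + 1)))
```

The rescaled bound in the last column is essentially constant over four orders of magnitude in $n$, which is exactly the $O(n^{-2s/(2s+1)})$ decay claimed above.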
A. Proof of Theorem 3.1
We first establish some results regarding the estimated coefficients of (4).
Lemma 5.1:
The expected value of an approximated coefficient is given by
(i)
$$E[\hat{\alpha}_j] = \alpha_j = \langle f, \phi_j \rangle, \qquad (25)$$
and the integrated MSE of the coefficient estimates satisfies
(ii)
$$E\left[|\hat{\alpha}_j - \alpha_j|^2\right] \le \frac{c^2}{n} \int_{\mathcal{D}} \frac{|\phi_j(x)|^2}{p_X(x)}\, dx. \qquad (26)$$
The approximated coefficients also have the following convergence property:
(iii)
$$\hat{\alpha}_j \xrightarrow{\mathrm{a.s.}} \alpha_j \quad \text{as} \quad n \longrightarrow \infty. \qquad (27)$$

Proof: (i) The expectation of the coefficient estimates can be evaluated as follows:
$$E[\hat{\alpha}_j] = E\left[\frac{c}{n} \sum_{i=1}^n \frac{\phi_j^*(X_i)}{p_X(X_i)}\, B_i\right] = \frac{c}{n} \sum_{i=1}^n E\left[\frac{\phi_j^*(X_i)}{p_X(X_i)}\, \mathrm{sgn}(f(X_i) + Z_i - T_i)\right] = c\, E\left[\frac{\phi_j^*(X)}{p_X(X)}\, \mathrm{sgn}(f(X) + Z - T)\right], \qquad (28)$$
where the last equality follows since the terms are iid. This last expectation can be evaluated as follows:
$$E\left[\frac{\phi_j^*(X)}{p_X(X)}\, \mathrm{sgn}(f(X) + Z - T)\right] = \int_{\mathcal{D}} p_X(x) \int_{-b}^{+b} p_Z(z) \int_{-c}^{+c} \frac{1}{2c}\, \frac{\phi_j^*(x)}{p_X(x)}\, \mathrm{sgn}(f(x) + z - t)\, dt\, dz\, dx$$
$$= \int_{\mathcal{D}} \int_{-b}^{+b} p_Z(z)\, \phi_j^*(x)\, \frac{1}{2c} \left( \int_{-c}^{f(x)+z} dt - \int_{f(x)+z}^{+c} dt \right) dz\, dx = \frac{1}{c} \int_{\mathcal{D}} \int_{-b}^{+b} p_Z(z)\, \phi_j^*(x)\, (f(x) + z)\, dz\, dx$$
$$= \frac{1}{c} \int_{\mathcal{D}} \phi_j^*(x)\, f(x)\, dx = \frac{1}{c} \langle f, \phi_j \rangle = \frac{\alpha_j}{c}, \qquad (29)$$
where the second-to-last line follows from the assumption that $p_Z$ is a zero-mean distribution. Combining (28) and (29), we have (25).

(ii) Since $E[\hat{\alpha}_j] = \alpha_j$,
$$E\left[|\hat{\alpha}_j - \alpha_j|^2\right] = E\left[|\hat{\alpha}_j - E[\hat{\alpha}_j]|^2\right] = \mathrm{Var}[\hat{\alpha}_j]. \qquad (30)$$
Using standard properties of the variance and the fact that the terms $\{\phi_j^*(X_i) B_i / p_X(X_i)\}_{i=1}^n$ are iid, we obtain
$$\mathrm{Var}[\hat{\alpha}_j] = \frac{c^2}{n^2} \sum_{i=1}^n \mathrm{Var}\left[\frac{\phi_j^*(X_i)}{p_X(X_i)}\, B_i\right] = \frac{c^2}{n}\, \mathrm{Var}\left[\frac{\phi_j^*(X)}{p_X(X)}\, B\right] = \frac{c^2}{n}\, E\left[\left|\frac{\phi_j^*(X)}{p_X(X)}\, B\right|^2\right] - \frac{c^2}{n} \left| E\left[\frac{\phi_j^*(X)}{p_X(X)}\, B\right] \right|^2$$
$$\le \frac{c^2}{n}\, E\left[\left|\frac{\phi_j^*(X)}{p_X(X)}\right|^2\right] = \frac{c^2}{n} \int_{\mathcal{D}} \frac{|\phi_j(x)|^2}{p_X(x)^2}\, p_X(x)\, dx = \frac{c^2}{n} \int_{\mathcal{D}} \frac{|\phi_j(x)|^2}{p_X(x)}\, dx. \qquad (31)$$
Combining (30) and (31), we arrive at (26).

(iii) The coefficient estimates
$$\hat{\alpha}_j = \frac{c}{n} \sum_{i=1}^n \frac{\phi_j^*(X_i)}{p_X(X_i)}\, B_i = \frac{c}{n} \sum_{i=1}^n \frac{\phi_j^*(X_i)}{p_X(X_i)}\, \mathrm{sgn}(f(X_i) + Z_i - T_i) \xrightarrow{\mathrm{a.s.}} c\, E\left[\frac{\phi_j^*(X)}{p_X(X)}\, \mathrm{sgn}(f(X) + Z - T)\right] \qquad (32)$$
as $n \longrightarrow \infty$, by Kolmogorov's strong law of large numbers, since the terms in the summation are iid and each has a first moment bounded by $\sqrt{\mathrm{vol}(\mathcal{D})}$:
$$E\left[\left|\frac{\phi_j(X)}{p_X(X)}\right|\right] = \|\phi_j\|_1 \le \sqrt{\mathrm{vol}(\mathcal{D})}\, \|\phi_j\|_2 = \sqrt{\mathrm{vol}(\mathcal{D})},$$
where the inequality follows from the Cauchy–Schwarz inequality. Combining (29) and (32), we obtain (27), concluding the proof of Lemma 5.1.

For any orthonormal basis $\mathcal{B} = \{\phi_j\}_{j=0}^{\infty}$ and any field $f \in \mathcal{F}$, the integrated MSE of the estimate can be written as follows:
$$D = E\left[\|f - \hat{f}_{n,m}\|^2\right] = E\left[\left\| \sum_{j=0}^{\infty} \alpha_j \phi_j - \sum_{j=0}^{m-1} \hat{\alpha}_j \phi_j \right\|^2\right] = \sum_{j=0}^{m-1} E\left[|\hat{\alpha}_j - \alpha_j|^2\right] + \sum_{j=m}^{\infty} |\alpha_j|^2$$
$$\le \frac{c^2}{n} \sum_{j=0}^{m-1} \int_{\mathcal{D}} \frac{|\phi_j(x)|^2}{p_X(x)}\, dx + \underbrace{\sum_{j=m}^{\infty} |\alpha_j|^2}_{=\, \varepsilon^2[f,m,\mathcal{B}]}, \qquad (33)$$
where in the last step we used the bound given in (26). Thus we have (7), concluding the proof of Theorem 3.1.

B. Proof of Theorem 3.2
The pointwise error of the field estimate with respect to the $m$-term approximation can be written as
$$S_n(x) := \hat{f}_n(x) - f_{m(n)}(x) = \sum_{j=0}^{m(n)-1} \left( \frac{1}{n} \sum_{i=1}^n \frac{c\, \phi_j^*(X_i)\, B_i}{p_X(X_i)} - \alpha_j \right) \phi_j(x) = \frac{1}{n} \sum_{i=1}^n \sum_{j=0}^{m(n)-1} \left( \frac{c\, \phi_j^*(X_i)\, B_i}{p_X(X_i)} - \alpha_j \right) \phi_j(x) = \frac{1}{n} \sum_{i=1}^n U_i(x),$$
where for $i \in \{1, \ldots, n\}$,
$$U_i(x) := \sum_{j=0}^{m(n)-1} \left( \frac{c\, \phi_j^*(X_i)\, B_i}{p_X(X_i)} - \alpha_j \right) \phi_j(x).$$
Note that the $U_i(x)$ are iid and zero-mean due to (25) of Lemma 5.1. However, almost sure convergence of $S_n(x)$ cannot be directly deduced from the standard strong law of large numbers, since the distribution of $U_i(x)$ itself depends on $n$ because it is a summation of $m(n)$ terms. Instead, we leverage a more fundamental sufficient condition for almost sure convergence [24, p. 206]: if for all $\epsilon > 0$,
$$\sum_{n=1}^{\infty} P[|S_n(x)| \ge \epsilon] < \infty,$$
then $S_n(x) \xrightarrow{\mathrm{a.s.}} 0$ as $n \longrightarrow \infty$.

Associated with $S_n(x)$ is a martingale $\{V_k(x)\}_{k=0}^n$ given by $V_0 := 0$ and, for $k \in \{1, \ldots, n\}$,
$$V_k(x) := \sum_{i=1}^k \frac{1}{n}\, U_i(x).$$
$V_0(x), \ldots, V_n(x)$ is a martingale since $\{U_i(x)\}_{i=1}^n$ is iid with zero mean. Note that $V_n(x) = S_n(x)$ and that $|V_k(x) - V_{k-1}(x)| \le |U_k(x)/n|$. For each $i \in \{1, \ldots, n\}$,
$$|U_i(x)| \le c \left| \sum_{j=0}^{m(n)-1} \frac{\phi_j(x)\, \phi_j^*(X_i)}{p_X(X_i)} \right| + \left| \sum_{j=0}^{m(n)-1} \alpha_j\, \phi_j(x) \right|$$
by the triangle inequality. Under the assumptions that the conditions given by (12) and (13) hold and that the deployment distribution $p_X$ is non-singular, there exists a constant $C > 0$ such that for all $i \in \{1, \ldots, n\}$,
$$|U_i(x)| \le C\, \Lambda_{m(n)}$$
with probability 1, for almost every $x \in \mathcal{D}$. Thus
$$|V_k(x) - V_{k-1}(x)| \le \frac{C\, \Lambda_{m(n)}}{n}. \qquad (34)$$
According to the Azuma–Hoeffding inequality (see [25, p. 303]), if for all $k \in \{1, \ldots, n\}$, $|V_k(x) - V_{k-1}(x)| \le C_k$, then for all $\epsilon > 0$,
$$P[|V_n(x)| \ge \epsilon] \le 2 \exp\left( \frac{-\epsilon^2}{2 \sum_{k=1}^n C_k^2} \right).$$
Applying this inequality with $C_k = C\, \Lambda_{m(n)}/n$ for all $k \in \{1, \ldots, n\}$ (see (34)) and $V_n(x) = S_n(x)$, we obtain the upper bound
$$P[|S_n(x)| \ge \epsilon] \le 2 \exp\left( \frac{-\epsilon^2}{2 \sum_{k=1}^n C^2 \Lambda_{m(n)}^2 / n^2} \right) = 2 \exp\left( \frac{-\epsilon^2\, n}{2\, C^2\, \Lambda_{m(n)}^2} \right),$$
for almost every $x \in \mathcal{D}$. Therefore,
$$\sum_{n=1}^{\infty} P[|S_n(x)| \ge \epsilon] \le \sum_{n=1}^{\infty} 2 \exp\left( \frac{-\epsilon^2\, n}{2\, C^2\, \Lambda_{m(n)}^2} \right),$$
which is finite for all $\epsilon > 0$ and almost every $x \in \mathcal{D}$, due to condition (14). Thus, as $n \longrightarrow \infty$, almost surely $S_n(x) \longrightarrow 0$, for almost every $x \in \mathcal{D}$.

C. Proof of Proposition 3.1
Part (i): Note that if $|g_\infty(x,y)| = 0$ for almost every $(x,y) \in \mathcal{D} \times \mathcal{D}$ and $|h_\infty(x)| = 0$ for all $f \in \mathcal{F}$ and almost every $x \in \mathcal{D}$, then conditions (12) and (13) hold with some constants $C_1, C_2 > 0$. For $g_m$, we can write
$$\iint_{\mathcal{D}\times\mathcal{D}} p_X(y)\, |g_\infty(x,y)|^2\, dx\, dy = \iint_{\mathcal{D}\times\mathcal{D}} \liminf_{m \to \infty}\, p_X(y)\, |g_m(x,y)|^2\, dx\, dy \stackrel{(a)}{\le} \liminf_{m \to \infty} \iint_{\mathcal{D}\times\mathcal{D}} p_X(y)\, |g_m(x,y)|^2\, dx\, dy$$
$$= \liminf_{m \to \infty} \iint_{\mathcal{D}\times\mathcal{D}} p_X(y)\, \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} \sum_{k=0}^{m-1} \phi_j(x)\, \phi_j^*(y)\, \phi_k^*(x)\, \phi_k(y)\, \frac{1}{p_X(y)}\, dx\, dy$$
$$= \liminf_{m \to \infty} \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} \sum_{k=0}^{m-1} \underbrace{\int_{\mathcal{D}} \phi_j(x)\, \phi_k^*(x)\, dx}_{=\, \delta_{j-k}} \cdot \underbrace{\int_{\mathcal{D}} \phi_k(y)\, \phi_j^*(y)\, dy}_{=\, \delta_{j-k}} = \liminf_{m \to \infty} \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} 1 = \liminf_{m \to \infty} \frac{m}{\Lambda_m^2},$$
where inequality (a) is due to Fatou's lemma [19] and $\delta_k$ is the Kronecker delta function. Thus, for $\Lambda_m$ such that (17) is satisfied, we have
$$\iint_{\mathcal{D}\times\mathcal{D}} p_X(y)\, |g_\infty(x,y)|^2\, dx\, dy = 0,$$
which implies that $|g_\infty(x,y)| = 0$ for almost every $(x,y) \in \mathcal{D}\times\mathcal{D}$, due to (10). For $h_m$, we can write
$$\int_{\mathcal{D}} |h_\infty(x)|^2\, dx = \int_{\mathcal{D}} \liminf_{m \to \infty} |h_m(x)|^2\, dx \le \liminf_{m \to \infty} \int_{\mathcal{D}} |h_m(x)|^2\, dx = \liminf_{m \to \infty} \int_{\mathcal{D}} \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} \sum_{k=0}^{m-1} \alpha_j\, \alpha_k^*\, \phi_j(x)\, \phi_k^*(x)\, dx$$
$$= \liminf_{m \to \infty} \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} \sum_{k=0}^{m-1} \alpha_j\, \alpha_k^* \underbrace{\int_{\mathcal{D}} \phi_j(x)\, \phi_k^*(x)\, dx}_{=\, \delta_{j-k}} = \liminf_{m \to \infty} \frac{1}{\Lambda_m^2} \sum_{j=0}^{m-1} |\alpha_j|^2 \le \liminf_{m \to \infty} \frac{\|f\|^2}{\Lambda_m^2} \le \liminf_{m \to \infty} \frac{a^2\, \mathrm{vol}(\mathcal{D})}{\Lambda_m^2}, \quad \forall f \in \mathcal{F},$$
where the first inequality follows from Fatou's lemma [19] and the last inequality is due to $f$ being amplitude-bounded by $a$ over the support $\mathcal{D}$. Thus, for $\Lambda_m$ such that (17) is satisfied, we have, $\forall f \in \mathcal{F}$,
$$\int_{\mathcal{D}} |h_\infty(x)|^2\, dx = 0,$$
which implies that $|h_\infty(x)| = 0$ for all $f \in \mathcal{F}$ and almost every $x \in \mathcal{D}$.

Part (ii): Applying the triangle inequality, we can write
$$\left| \sum_{j=0}^{m-1} \phi_j(x)\, \phi_j^*(y)\, \frac{1}{p_X(y)} \right| \le \sum_{j=0}^{m-1} \frac{|\phi_j(x)|\, |\phi_j^*(y)|}{p_X(y)} \le \sum_{j=0}^{m-1} \frac{\beta^2}{\nu} = \frac{\beta^2}{\nu}\, m = C_1\, \Lambda_m,$$
which shows that condition (12) is satisfied for $\Lambda_m = m$ and $C_1 = \beta^2/\nu$. Again applying the triangle inequality, we can write
$$\left| \sum_{j=0}^{m-1} \langle f, \phi_j \rangle\, \phi_j(x) \right| \le \sum_{j=0}^{m-1} |\langle f, \phi_j \rangle|\, |\phi_j(x)| \le \sum_{j=0}^{m-1} \|f\|\, \beta = m\, \|f\|\, \beta \le m\, a\, \beta \sqrt{\mathrm{vol}(\mathcal{D})} = C_2\, \Lambda_m,$$
which shows that condition (13) is satisfied for $\Lambda_m = m$ and $C_2 = a\, \beta \sqrt{\mathrm{vol}(\mathcal{D})}$.

VI. CONCLUDING REMARKS
The principal contribution of this work is a systematic treatment of (i) binary sensing, (ii) random sensor deployment, and (iii) unknown observation noise distribution for high-resolution distributed sensing and estimation of multidimensional fields using dense sensor networks. A key finding of this work is that the rate of convergence of the integrated MSE for field estimation is extremely robust to the apparent limitations of ultra-poor sensing precision, random sensor deployment, and lack of knowledge of the observation noise statistics. In some cases, the convergence rate exactly matches the minimax rate of convergence attainable with infinite-precision, real-valued samples and known noise statistics. Interesting directions for future work include (i) establishing the exact rate of convergence of the integrated MSE and a central limit theorem for the estimate, (ii) analysis of the sensitivity of the integrated MSE to sensor location uncertainty, (iii) unbounded-amplitude signal and noise models, and (iv) general dither distributions.

ACKNOWLEDGMENT
The authors would like to thank Professor Elias Masry of the Department of Electrical and Computer Engineering, University of California, San Diego, for his encouraging comments.
REFERENCES

[1] Y. Wang and P. Ishwar, "On non-parametric field estimation using randomly deployed, noisy, binary sensors," in Proc. IEEE International Symposium on Information Theory, Nice, France, Jun. 2007.
[2] Y. Wang, N. Ma, M. Zhao, P. Ishwar, and V. Saligrama, "On universal distributed estimation of noisy fields with one-bit sensors," in Proc. Allerton Conference on Communications, Control, and Computing, Sep. 2006.
[3] E. Masry, "The reconstruction of analog signals from the sign of their noisy samples," IEEE Trans. Info. Theory, vol. IT-27, pp. 735–745, Nov. 1981.
[4] E. Masry and S. Cambanis, "Consistent estimation of continuous-time signals from nonlinear transformations of noisy samples," IEEE Trans. Info. Theory, vol. IT-27, pp. 84–96, Jan. 1981.
[5] Z. Q. Luo, "Universal decentralized estimation in a bandwidth constrained sensor network," IEEE Trans. Info. Theory, vol. IT-51, pp. 2210–2219, Jun. 2005.
[6] M. Gastpar and M. Vetterli, "Source-channel communication in sensor networks," Lecture Notes in Computer Science, vol. 2634, pp. 162–177, Apr. 2003.
[7] M. Gastpar, B. Rimoldi, and M. Vetterli, "To code, or not to code: Lossy source-channel communication revisited," IEEE Trans. Info. Theory, vol. IT-49, pp. 1147–1158, May 2003.
[8] R. Nowak, U. Mitra, and R. Willett, "Estimating inhomogeneous fields using wireless sensor networks," IEEE J. Sel. Areas Commun., vol. 22, no. 6, pp. 999–1006, Aug. 2004.
[9] K. Liu, H. El-Gamal, and A. Sayeed, "On optimal parametric field estimation in sensor networks," in Proc. IEEE/SP 13th Workshop on Statistical Signal Processing, Jul. 2005, pp. 1170–1175.
[10] W. Bajwa, A. Sayeed, and R. Nowak, "Matched source-channel communication for field estimation in wireless sensor networks," in Proc. Fourth Intl. Symposium on Information Processing in Sensor Networks, Apr. 2005, pp. 332–339.
[11] N. Liu and S. Ulukus, "Optimal distortion-power tradeoffs in Gaussian sensor networks," in Proc. IEEE International Symposium on Information Theory, Seattle, WA, USA, Jul. 2006, pp. 1534–1538.
[12] M. Gastpar, "Uncoded transmission is exactly optimal for a simple Gaussian sensor network," in Proc. Information Theory and Applications Workshop, Jan. 29–Feb. 2, 2007.
[13] T. Berger, Z. Zhang, and H. Viswanathan, "The CEO problem [multiterminal source coding]," IEEE Trans. Info. Theory, vol. IT-42, pp. 887–902, May 1996.
[14] H. Viswanathan and T. Berger, "The quadratic Gaussian CEO problem," IEEE Trans. Info. Theory, vol. IT-43, pp. 1549–1559, Sep. 1997.
[15] V. Prabhakaran, D. Tse, and K. Ramchandran, "Rate region of the quadratic Gaussian CEO problem," in Proc. IEEE International Symposium on Information Theory, Chicago, IL, Jun. 2004, p. 119.
[16] Z. Cvetković, "Resilience properties of redundant expansions under additive noise and quantization," IEEE Trans. Info. Theory, vol. IT-49, pp. 644–656, Mar. 2003.
[17] S. Mallat, A Wavelet Tour of Signal Processing. Academic Press, 1998.
[18] K. Knopp, Theory and Application of Infinite Series. Dover Publications, 1990.
[19] W. Rudin, Real and Complex Analysis, 2nd ed. McGraw-Hill, 1974.
[20] L. Carleson, "On convergence and growth of partial sums of Fourier series," Acta Mathematica, vol. 116, no. 1, pp. 135–157, Dec. 1966.
[21] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, 1st ed. Upper Saddle River, NJ: Prentice-Hall, 1993.
[22] A. P. Korostelev and A. B. Tsybakov, Minimax Theory of Image Reconstruction. New York: Springer-Verlag, 1993.
[23] G. Golubev and M. Nussbaum, "A risk bound in Sobolev class regression," The Annals of Statistics, vol. 18, no. 2, pp. 758–778, 1990.
[24] R. Ash, Basic Probability Theory. John Wiley and Sons, 1970.
[25] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.