Optimal Measurement Times for a Small Number of Measures of a Brownian Motion over a Finite Period
Alexandre Aksenov, Pierre-Olivier Amblard, Olivier Michel, Christian Jutten
IEEE TRANSACTIONS ON SIGNAL PROCESSING
Abstract—The measurement timetable plays a critical role in the accuracy of the estimator. This article deals with the optimization of the schedule of measurements for observing a random process in time using a Kalman filter, when the length of the process is finite and fixed, and a fixed number of measurements is available. The measuring devices are allowed to differ. The mean variance of the estimator is chosen as the criterion for optimality. The cases of 1 or 2 measurements are studied in detail, and analytical formulas are provided.

Index Terms—Random walk, Wiener process, Kalman filter, Multimodality, Optimal Sampling.
I. INTRODUCTION

When a latent phenomenon is observed through different acquisition methods, more information can be acquired than from a single method, but making the most of these measurements is a challenge [8], [10], [5]. This is due to discrepancies in the nature of the data, in particular in the sampling. The observer often cannot control the instants of measurement and makes regular measurements with each of the available sensors. In this case, controlling the delays between measurements with different sensors can lead to a substantial gain in the quality of the estimator [3]. One may also ask: what is the optimal timetable of measurements when the devices are of different quality? This problem is explored in several recent papers.
A. Previous work
Different models of the observed process and of the sensors, as well as different optimization criteria, have been explored. Models of an observed process of infinite duration have been considered [4], [3]. In this case, the mean covariance of the estimator over a long period of observation is minimized. In other terms, the optimization criterion only takes the steady-state performance of a periodic schedule into account. A model in continuous time is explored in [3], while time is discretized in the model of [4]. Another notable difference between the two models lies in that a measurement is performed at every moment of the discrete time in [4]. As opposed to optimizing the steady-state performance [4], [3], local optimization is performed in the setting considered in [9]. The resulting schedule is proved to be ultimately periodic, which is an a priori assumption in [4], [3].
This work has been partly supported by the European project ERC-2012-AdG-320684-CHESS.
When the process has a finite duration, the steady state is not achieved (e.g., [11]). Optimizing the performance over a finite time interval is then to be considered [6], [13]. The optimal periodic schedule in a model with discrete time is sought in [13] with respect to the performance over a finite time interval. It is supposed that the interval is long enough with respect to the measurement period. No additional assumptions regarding the number of measurements or the duration of the process (which is supposed to be finite) are made in the seminal work [6]. A model where sensors are active during an interval of time is considered there. The length of the interval of activation is the result of a tradeoff between the quality of estimation and the cost (per unit time) of using a measurement device. The optimal solution is given in the form of an optimization problem in [6].
B. Contributions of the paper.
A model of observation of a scalar continuous latent variable on a finite interval of time with noisy sensors is considered. Each sensor has access to only one measurement at one time instant. The process evolves in continuous time in the considered model. The measurement noises of all sensors are independent random variables. The quality of estimation is evaluated according to the mean variance of the estimator over time. The model studied here is simpler than that of [6] (because the measurements are instantaneous), which allows us to study its properties in greater detail. A qualitative study of the optimal instants of measurement reveals different behaviors (“regimes”) depending on the parameters. Analytic formulas for the different regimes are given in the present paper and proved in the Technical Report [2]. The optimal instant of measurement is given by an analytic formula in the case of one measurement. In the case of two measurements, an iterative algorithm and a formula in the form of a solution of a system of two equations are given. The main theoretical results of this paper are the optimal instants of measurement in the case of one or two measurements (see Proposition 1 and Theorem 8). These results are illustrated by numerical computation of the optimal schedules when several measurements are available, the values of the parameters being fixed or random.

The paper is organized as follows. The general (multimodal, irregularly scheduled) Kalman estimation model and the cost function are defined in Section II. The particular case where the instant of only one measurement is variable has been studied in the authors' previous work [1]. The results of [1] are recalled and completed in Section III. The particular case where the instants of two measurements are variable is studied in Section IV.

II. MODEL DESCRIPTION AND OPTIMIZATION OBJECTIVE

A. The Model of Scalar Brownian Motion.
We assume that the estimation of the system state is done by computing the time evolution of a parameter, and that the variance of the estimation grows linearly between measurements. This simple assumption models the fact that decreasing the measurement frequency decreases the accuracy of the system state estimation. For this purpose, we consider a real Brownian motion $\theta(t)$ ($t \in [0,T]$), satisfying for $t > s$, $\theta(t) - \theta(s) \sim \mathcal{N}(0, \sigma^2(t-s))$, i.e., the increments are Gaussian with zero mean and variance $\sigma^2(t-s)$.

Suppose $n$ sensors can make measurements at moments $t_1, \dots, t_n$ ($0 \leq t_1 \leq \dots \leq t_n \leq T$). It is assumed that each sensor $k$ returns a measured value equal to $X_k$ at time $t_k$. No subsequence of the sequence $(t_1, \dots, t_n)$ is constrained to be regular in any sense.

Kalman filtering is used for estimating the state $\theta(t)$ of the system using the results of the measurements preceding $t$. Suppose the initial state $\theta(0)$ is a Gaussian random variable of mean $\bar\theta$ and variance $v_0$. Suppose that $\theta(0)$, the measurement noise and the evolution of the Brownian motion $\theta(t)$ are independent. The Kalman filter framework applies with the state and measurement equations:

$\theta(t_k) = \theta(t_{k-1}) + w_k, \quad w_k \sim \mathcal{N}(0, \sigma^2(t_k - t_{k-1}))$ (1)

$X_k = \theta(t_k) + n_k, \quad n_k \sim \mathcal{N}(0, v_k)$. (2)

By the theory of Kalman filtering (see [7]), the maximum likelihood estimate $\hat\theta_{t_k}^{t_k}$ of $\theta(t_k)$ and its variance $\Gamma_{t_k}^{t_k}$ are defined by the following recursive equations:

$\hat\theta_{t_k}^{t_k} = \hat\theta_{t_k}^{t_{k-1}} + K(t_k)\big(X_k - \hat\theta_{t_k}^{t_{k-1}}\big)$ (3)

$\hat\theta_{t_k}^{t_{k-1}} = \hat\theta_{t_{k-1}}^{t_{k-1}}$ (4)

$\Gamma_{t_k}^{t_k} = \Gamma_{t_k}^{t_{k-1}} - K(t_k)\,\Gamma_{t_k}^{t_{k-1}}$ (5)

$K(t_k) = \Gamma_{t_k}^{t_{k-1}} \big(\Gamma_{t_k}^{t_{k-1}} + v_k\big)^{-1}$ (6)

$\Gamma_{t_k}^{t_{k-1}} = \Gamma_{t_{k-1}}^{t_{k-1}} + \sigma^2(t_k - t_{k-1})$, (7)

where $\hat\theta_{t_k}^{t_l}$ ($l \in \{k-1, k\}$) is the maximum likelihood estimate of $\theta(t_k)$ conditionally on the data available at time $t_l$, and $\Gamma_{t_k}^{t_l}$ is the variance of the estimate $\hat\theta_{t_k}^{t_l}$.
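As an illustration, the scalar recursion (3)-(7) takes only a few lines; the sketch below is ours, not code from the paper, and all names are illustrative.

```python
# Illustrative sketch of the scalar Kalman update (3)-(7); function and
# variable names are ours, not the paper's.

def kalman_step(theta_prev, gamma_prev, dt, x_meas, v_meas, sigma2=1.0):
    """One predict-and-update step for the Brownian-motion model.

    theta_prev, gamma_prev: estimate and variance at the previous instant;
    dt: elapsed time t_k - t_{k-1}; x_meas, v_meas: measured value X_k and
    its noise variance v_k; sigma2: diffusion coefficient sigma^2.
    """
    gamma_pred = gamma_prev + sigma2 * dt              # (7): predicted variance
    gain = gamma_pred / (gamma_pred + v_meas)          # (6): Kalman gain K(t_k)
    theta = theta_prev + gain * (x_meas - theta_prev)  # (3)-(4): state update
    gamma = gamma_pred - gain * gamma_pred             # (5): posterior variance
    return theta, gamma
```

For instance, starting from $\bar\theta = 0$, $v_0 = 1$ and assimilating $X_1 = 0.3$ with $v_1 = 1$ after an elapsed time $0.5$ gives the posterior variance $1.5 \cdot 1 / 2.5 = 0.6$, in agreement with the harmonic form (8) below.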
$K(t_k)$ is the Kalman gain used for the update at time $t_k$. In order for (7) to make sense for $k = 1$, define $t_0 = 0$ and $\Gamma_{t_0}^{t_0} = v_0$.

Remark that, by (5), (6), using the fact that all quantities are scalar,

$\Gamma_{t_k}^{t_k} = \Gamma_{t_k}^{t_{k-1}} - \dfrac{\big(\Gamma_{t_k}^{t_{k-1}}\big)^2}{\Gamma_{t_k}^{t_{k-1}} + v_k} = \dfrac{v_k\, \Gamma_{t_k}^{t_{k-1}}}{v_k + \Gamma_{t_k}^{t_{k-1}}}$, (8)

which is equivalent (by (7)) to

$\big(\Gamma_{t_k}^{t_k}\big)^{-1} = v_k^{-1} + \big(\Gamma_{t_k}^{t_{k-1}}\big)^{-1} = v_k^{-1} + \big(\Gamma_{t_{k-1}}^{t_{k-1}} + \sigma^2(t_k - t_{k-1})\big)^{-1}$. (9)

Fig. 1. The function $v(t)$ in particular cases. In (a), $v_0 = 0$, $v_1 = v_2 = v_3 = 1$, $T = 1$, $\sigma^2 = 1$. In (b), $v_0 = 0$, $v_1 = 1$, $v_2 = 2$, $v_3 = 3$, $T = 1$, $\sigma^2 = 1$. The values of $v_1, v_2, v_3$ control the differences of the variance before and after the measurements. In the first example, $v_1, v_2, v_3$ are equal; in the second example they are different.

Therefore, each $\Gamma_{t_k}^{t_k}$ is a rational function of $\sigma^2, t_1, \dots, t_k, v_0, \dots, v_k$. For each $t \in [0,T]$, denote $v(t)$ the variance of $\hat\theta(t)$, i.e., the variance when the last measurement was taken plus the uncertainty due to the time without new feedback. It equals

$v(t) = \Gamma_{t_k}^{t_k} + \sigma^2(t - t_k)$, where $k = \max\{i \mid t_i \leq t\}$. (10)

$v(t)$ is a piecewise linear function composed of line segments of slope $\sigma^2$. Two examples of functions $v(t)$ are shown in Figure 1.

B. Notation.
Throughout this paper, the notation $(a \mathbin{//} b)$ will stand for $\frac{ab}{a+b}$. Note that this notation allows one to rewrite (8) in a more compact way:

$\Gamma_{t_k}^{t_k} = v_k \mathbin{//} \Gamma_{t_k}^{t_{k-1}}$. (11)

The notation $v_{k,\dots,l}$ (where $0 \leq k \leq l \leq n$) will stand for $(v_k \mathbin{//} v_{k+1} \mathbin{//} \cdots \mathbin{//} v_l)$. If $k = 0$, $v_{0,1,\dots,l}$ is the variance of the Kalman estimator of $\theta(0)$ which uses the information of sensors $1, \dots, l$, supposing that these sensors are activated at the instant $0$. $v_{0,1,\dots,l}$ is the smallest possible value of $\Gamma_{t_l}^{t_l}$. If $k > 0$, $v_{k,\dots,l}$ is the error variance of the equivalent device obtained by activating the devices number $k, k+1, \dots, l$ simultaneously.

C. The Optimization Criterion, General Results and Notations.
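In code, the $(a \mathbin{//} b)$ operation is the harmonic combination familiar from parallel resistors. A minimal sketch (helper names are ours):

```python
from functools import reduce

def par(a, b):
    """The paper's (a // b) notation: ab / (a + b)."""
    return a * b / (a + b)

def v_combined(variances):
    """v_{k,...,l}: error variance of the equivalent device obtained by
    activating several devices simultaneously; (//) is associative, so a
    left fold over the list gives the combined variance."""
    return reduce(par, variances)
```

Since $(a \mathbin{//} b)^{-1} = a^{-1} + b^{-1}$, the combined variance is just the inverse of the summed precisions, which is a quick sanity check on the helper.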
The following optimization criterion is chosen in this article: the mean of the variance $v(t)$ of the maximum likelihood estimator of $\theta(t)$ is minimized by choosing the measurement instants $t_1, \dots, t_n$. This implies that the following cost function is to be minimized under the constraint $0 \leq t_1 \leq t_2 \leq \dots \leq t_n \leq T$:

$J_{\sigma^2,T,v_0,v_1,\dots,v_n}(t_1,\dots,t_n) = \displaystyle\int_0^T v(t)\,dt = \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + \dfrac{\sigma^2 (t_2-t_1)^2}{2} + \Gamma_{t_1}^{t_1}(t_2-t_1) + \dots + \dfrac{\sigma^2 (T-t_n)^2}{2} + \Gamma_{t_n}^{t_n}(T-t_n)$. (12)
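The closed form (12) can be cross-checked against a direct numerical integration of the piecewise linear $v(t)$ of (10). The following sketch is our code, not the authors':

```python
def cost_closed_form(ts, vs, T, v0, sigma2=1.0):
    """Eq. (12): rectangular terms Gamma*dt plus triangular terms
    sigma^2*dt^2/2 over the gaps between measurements."""
    J, gamma, prev = 0.0, v0, 0.0
    for t, v in zip(ts, vs):
        dt = t - prev
        J += gamma * dt + sigma2 * dt**2 / 2
        g_pred = gamma + sigma2 * dt
        gamma = g_pred * v / (g_pred + v)    # (8): update at the measurement
        prev = t
    dt = T - prev
    return J + gamma * dt + sigma2 * dt**2 / 2

def cost_numeric(ts, vs, T, v0, sigma2=1.0, n=200000):
    """Midpoint Riemann sum of the piecewise linear v(t) of (10)."""
    J, gamma, prev, k = 0.0, v0, 0.0, 0
    h = T / n
    for i in range(n):
        t = (i + 0.5) * h
        while k < len(ts) and ts[k] <= t:     # process measurements passed
            g_pred = gamma + sigma2 * (ts[k] - prev)
            gamma = g_pred * vs[k] / (g_pred + vs[k])
            prev = ts[k]
            k += 1
        J += (gamma + sigma2 * (t - prev)) * h
    return J
```

The two evaluations agree up to the discretization error of the Riemann sum, which gives confidence in the bookkeeping of the rectangular and triangular terms.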
One can remark that the cost function (12) is rational in its $n+3$ parameters $\sigma^2, T, v_0, \dots, v_n, t_1, \dots, t_n$. If this function is minimized at a unique point

$\big(t_{1,\mathrm{opt}}^{(n)}(\sigma^2, T, v_0, \dots, v_n), \dots, t_{n,\mathrm{opt}}^{(n)}(\sigma^2, T, v_0, \dots, v_n)\big)$, (13)

these values are the optimal measurement instants. We can wonder where these instants are located, and especially whether some of them are equal to zero. The minimizer is indeed unique in the cases $n = 1, 2$, which is proved in Subsections III-B and IV-E below. We are also interested in the behavior of the optimal measurement times as functions of $T$: monotonicity, asymptotics, etc.

III. THE OPTIMAL INSTANT OF ONE MEASUREMENT

A. Overview of the Problem and Results.
In this Section, the above problem is studied for the particular case where $n = 1$ measurement can be performed. All the questions listed above are solved in terms of explicit formulas in Subsection III-B. Solving this particular case is necessary for tackling more complex problems. Multimodality is of smaller importance in this case than in the more involved cases of $n = 2$ and $n > 2$ measurements.

The cost function (12) takes the form

$J_{\sigma^2,T,v_0,v_1}(t_1) = \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + \dfrac{\sigma^2 (T-t_1)^2}{2} + \dfrac{(\sigma^2 t_1 + v_0)\,v_1}{\sigma^2 t_1 + v_0 + v_1}\,(T-t_1)$. (14)

Its behavior is shown in Figure 2, (a). Remark that the RHS of equation (14) can be split into two terms: the “rectangular term” $\big(v_0 t_1 + \frac{(\sigma^2 t_1 + v_0)v_1}{\sigma^2 t_1 + v_0 + v_1}(T-t_1)\big)$ and the “triangular term” $\big(\frac{\sigma^2 t_1^2}{2} + \frac{\sigma^2 (T-t_1)^2}{2}\big)$, respectively accounting for the contributions of the rectangular and triangular shaped areas in the integral of $v(t)$, shown in Figure 2, (b). Minimizing the cost function $J_{\sigma^2,T,v_0,v_1}(t_1)$ constitutes a tradeoff between minimizing these two terms.

Different situations are possible, as can be seen in Figure 2, (a). One can define regime 1 as the set of situations where $t_1 = 0$ is the optimum. Similarly, define regime 2 as the set of situations where the optimal $t_1$ is in the interior of the interval $[0,T]$. Then the optimal $t_1$ is the point where the derivative of the cost function (14) vanishes. Its value is given by (18). Remark that in regime 2, the optimal $t_1$ can be larger than $T/2$. The optimal instant of measurement is given by the following statement.

Proposition 1.
Let the parameters $\sigma^2 > 0$, $T > 0$, $v_0 \geq 0$, $v_1 \geq 0$ be fixed. The optimal instant of measurement is

$t_{\mathrm{opt}}^{(1)}(\sigma^2, T, v_0, v_1) = \arg\min_{t_1} J_{\sigma^2,T,v_0,v_1}(t_1) = \max\left(0,\ \dfrac{-3v_0 - 3v_1 + \sigma^2 T + \sqrt{(\sigma^2 T + v_0 + 5v_1)^2 - (4v_1)^2}}{4\sigma^2}\right)$. (15)

Fig. 2. (a): $J_{\sigma^2,T,v_0,v_1}(t_1)$ as a function of $v_0$ and $t_1$. The parameters are $v_1 = 1$, $T = 1$, $\sigma^2 = 1$. The cost function is minimized at $t_1 = 0$ if and only if $v_0 \geq \sqrt{2}$. (b): An example of a function $v(t)$ showing the geometric interpretation of the rectangular and the triangular terms of the expression (14) of the integral cost function.

Here, the general notation $t_{1,\mathrm{opt}}^{(1)}$ of (13) is simplified by dropping the unnecessary index $1$. Proposition 1 is proved in Subsection III-B.

B. Derivation of Proposition 1 and Properties of the Optimal Instant of Measurement.
The behavior of the cost function can be studied using its partial derivative:

$\dfrac{\partial J_{\sigma^2,T,v_0,v_1}(t_1)}{\partial t_1} = \dfrac{v_0 + \sigma^2 t_1}{v_0 + v_1 + \sigma^2 t_1}\left(v_0 + \sigma^2 t_1 - \sigma^2 (T - t_1)\left(\dfrac{v_1}{v_0 + v_1 + \sigma^2 t_1} + 1\right)\right)$. (16)

Remark that the RHS of (16) is a product of two increasing (with respect to $t_1$) factors, the first of which, $\frac{v_0 + \sigma^2 t_1}{v_0 + v_1 + \sigma^2 t_1}$, is nonnegative (this factor vanishes iff $v_0 = 0$ and $t_1 = 0$). In addition, this derivative is positive at the point $t_1 = T$. Therefore, the locus of positivity of $\frac{\partial J}{\partial t_1}$ is an interval of the form $]t_{\mathrm{opt}}^{(1)}, T]$, where $t_{\mathrm{opt}}^{(1)}$ may equal zero or be strictly positive. Consequently, two different behaviors of the cost function are possible. In the first case (regime 1), it is increasing near $t_1 = 0$. Then the cost function $J_{\sigma^2,T,v_0,v_1}(t_1)$ is increasing and convex on the whole interval $[0,T]$, and its global minimum is $t_{\mathrm{opt}}^{(1)}(T) = 0$. According to (16), this corresponds to

$T \leq T_{\mathrm{crit}}^{(1)}(\sigma^2, v_0, v_1) = \dfrac{v_0}{\sigma^2\left(\dfrac{v_1}{v_0+v_1} + 1\right)}$. (17)

In the second case (regime 2), the cost function is decreasing near $t_1 = 0$. This is observed when (17) does not hold, i.e., $T$ is large or $v_0$ is small. Then the minimum of the cost function is reached at the only nonzero point $t_{\mathrm{opt}}^{(1)}$ where its derivative (16) equals zero. By equating the derivative (16) to zero, one gets the following expression for $t_{\mathrm{opt}}^{(1)}$:

$t_{\mathrm{opt}}^{(1)} = \dfrac{-3v_0 - 3v_1 + \sigma^2 T + \sqrt{(\sigma^2 T + v_0 + 5v_1)^2 - (4v_1)^2}}{4\sigma^2}$. (18)

Remark that the duration $T$ can be expressed from $\sigma^2, t_{\mathrm{opt}}^{(1)}, v_0, v_1$ in this case as a rational function:

$T = 2\, t_{\mathrm{opt}}^{(1)} + \dfrac{v_0 - v_1}{\sigma^2} + \dfrac{2 v_1^2}{\sigma^2 \big(v_0 + \sigma^2 t_{\mathrm{opt}}^{(1)} + 2 v_1\big)} = T_t\big(\sigma^2, t_{\mathrm{opt}}^{(1)}, v_0, v_1\big)$. (19)

Using (17) and (18), it is easy to check that $t_{\mathrm{opt}}^{(1)}(T_{\mathrm{crit}}^{(1)}) = 0$, i.e., both formulas of regime 1 and regime 2 coincide if the values of the parameters lie on the boundary.
This proves Proposition 1.

Remark that $T_{\mathrm{crit}}^{(1)}$ is an increasing function of $v_0$ and a decreasing function of $v_1$ and of $\sigma^2$. The limit cases of (17) have the following intuitive interpretations. If $v_0 \ll v_1$ (the observer has precise knowledge about the state of the system at the instant $0$), $T_{\mathrm{crit}}^{(1)} \approx 0$; therefore the next measurement should not be done at the same time. If $v_1 \gg v_0$ (the measurement is very inexact), the measurement should be scheduled for a moment different from zero if $T \geq \frac{v_0}{2\sigma^2}$. On the other hand, if $v_1 \ll v_0$ (there is a possibility to gain precise knowledge about the system at an instant the observer can choose), then the measurement should be done as soon as possible if $T \leq \frac{v_0}{\sigma^2}$.

Intuitively, regime 1 is observed when $T$ is small or $v_0$ is large, which means that the prior information that the observer gets for free is poor. In this case, it is penalizing not to take a measurement immediately in order to get better information. More formally, the rectangular term has an order of magnitude $O(T)$ when $T$ tends to zero, while the triangular term has an order of magnitude $O(T^2)$. Therefore, when $T$ is small enough, choosing $t_1 = 0$ should minimize both the rectangular term and the sum.

The following Proposition summarizes some qualitative properties of the optimal instant of measurement.

Proposition 2.
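A sketch of (15) and (17) in Python, checked against a brute-force scan of the cost (14); the code is ours, written from the formulas above:

```python
import math

def t1_opt(T, v0, v1, sigma2=1.0):
    """Optimal single-measurement instant, eqs. (15)/(18)."""
    s2T = sigma2 * T
    root = math.sqrt((s2T + v0 + 5*v1)**2 - (4*v1)**2)
    return max(0.0, (s2T - 3*v0 - 3*v1 + root) / (4*sigma2))

def T_crit(v0, v1, sigma2=1.0):
    """Critical duration (17): the optimum is t1 = 0 iff T <= T_crit."""
    return v0 * (v0 + v1) / (sigma2 * (v0 + 2*v1))

def J1(t, T, v0, v1, sigma2=1.0):
    """One-measurement cost (14)."""
    gamma = (sigma2*t + v0) * v1 / (sigma2*t + v0 + v1)
    return sigma2*t**2/2 + v0*t + sigma2*(T-t)**2/2 + gamma*(T-t)
```

Here `T_crit` uses the equivalent closed form $v_0(v_0+v_1)/(\sigma^2(v_0+2v_1))$ of (17), obtained by clearing the inner fraction.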
The function $t_{\mathrm{opt}}^{(1)}$ is differentiable everywhere except at the border between regime 1 and regime 2. $t_{\mathrm{opt}}^{(1)}(\sigma^2, T, v_0, v_1)$ is nondecreasing as a function of $T$ (constant on the interval $T \in\, ]0, T_{\mathrm{crit}}^{(1)}]$), decreasing as a function of $v_0$ and increasing as a function of $v_1$. On the interval $T \in [T_{\mathrm{crit}}^{(1)}, +\infty[$ it is a concave and strictly increasing function of $T$. Its asymptotic expansion is

$t_{\mathrm{opt}}^{(1)}(T) = \dfrac{T}{2} + \dfrac{v_1 - v_0}{2\sigma^2} + o\!\left(\dfrac{1}{T}\right)$, (20)

the function being always smaller than its asymptote:

$t_{\mathrm{opt}}^{(1)}(T) < \dfrac{T}{2} + \dfrac{v_1 - v_0}{2\sigma^2}$. (21)

When $v_1$ is large, one gets the limit:

$\lim_{v_1 \to \infty} t_{\mathrm{opt}}^{(1)}(\sigma^2, T, v_0, v_1) = \max\left(0,\ \dfrac{2\sigma^2 T - v_0}{3\sigma^2}\right)$. (22)

Proposition 2 is proved in the Technical Report [2].

The following intuitive argument can be given for the order of magnitude of the optimal instant: $t_{\mathrm{opt}}^{(1)}(T) \sim \frac{T}{2}$ (by (20)). When $T$ is large, the triangular term becomes more important than the rectangular term. Therefore, the minimum of the sum should be close to the value $\frac{T}{2}$, which minimizes the triangular term.

Remark that the dependence of $t_{\mathrm{opt}}^{(1)}$ on $\sigma^2$ and $T$ is simplified by the relation

$t_{\mathrm{opt}}^{(1)}\!\left(\tfrac{\sigma^2}{\alpha}, \alpha T, v_0, v_1\right) = \alpha\, t_{\mathrm{opt}}^{(1)}(\sigma^2, T, v_0, v_1)$; (23)

therefore the ratio $t_{\mathrm{opt}}^{(1)}/T$ depends only on $\sigma^2 T$, $v_0$ and $v_1$.

C. Bounds on the Cost Function.
One may ask for easy-to-compute lower and upper bounds $\underline{J}$ and $\bar{J}$ of the cost function $J$ which are independent of the instant of measurement. The value reached without measuring in the interval (which is equivalent to measuring at $t_1 = T$) is a trivial upper bound:

$J_{T,v_0,v_1}(t_1) = \displaystyle\int_0^T v(t)\,dt \leq v_0 T + \dfrac{\sigma^2 T^2}{2} = \bar{J}(T, v_0)$. (24)

A lower bound is suggested by the article [3]. It leads to formulating the following.

Theorem 3.
The cumulative variance of a Kalman filter is bounded below by the quantity given by (25), which is independent of the instant of measurement $t_1$:

$J_{T,v_0,v_1}(t_1) > \sqrt{\sigma^2 v_{0,1}}\; T^{3/2} = \underline{J}(T, v_0, v_1)$. (25)

Theorem 3 is proved in the Technical Report [2].

Two numerical experiments have been performed in order to compare the cost achieved by measuring at the optimal instant with the cost achieved by using an intuitive strategy, and with the lower bound $\underline{J}$. Their results are shown in Figure 3. In the first experiment (Figure 3, (a)), the costs achieved by measuring at the optimal instant have been computed and plotted together with the costs achieved by the intuitive strategies of measuring at $0$ or at $T$, and with the corresponding values of the lower bound $\underline{J}$. The values $T = 1$, $\sigma^2 = 1$, $v_1 = 1$ and varying $v_0$ have been used for the parameters. In the second experiment (Figure 3, (b)), the costs $J_{\mathrm{opt}}$ achieved by measuring at the optimal instant have been computed together with the costs $J_{\mathrm{reg}}$ achieved by measuring at $T$. The values $T = 1$, $\sigma^2 = 1$ and varying $v_0, v_1$ have been used for the parameters. Figure 3, (b) shows a contour plot of the gain $\frac{J_{\mathrm{reg}} - J_{\mathrm{opt}}}{J_{\mathrm{reg}}}$ as a function of $v_0, v_1$.

Figure 3, (a) shows that measuring at the best instant among $0$ and $T$ leads to a performance close to the optimal. Finding the correct regime is therefore more important than computing the optimal instant with high precision. The contour plot of Figure 3, (b) shows that for parameters $v_0, v_1$ in the considered range, the gain can be substantial.

Fig. 3. (a)
The costs for different choices of the instant of measurement $t_1$, compared with the lower bound $\underline{J}$ (see its definition (25)). The parameters equal $T = 1$, $\sigma^2 = 1$, $v_1 = 1$. (b) The contour plot of the gain of measuring at the optimal instant compared to measuring at $T$. The gain is defined as $\frac{J_{\mathrm{reg}} - J_{\mathrm{opt}}}{J_{\mathrm{reg}}}$. The parameters equal $T = 1$, $\sigma^2 = 1$.

D. Kalman Filter with One Measurement per Window, where the Windows are Periodic
If only one measurement is possible during a finite time interval, the optimal instant for this measurement has been determined above. When a Brownian motion is observed over an infinite time, the following scheduling strategy can be established: measure at moments $t_{1,1} \in [0,T]$, $t_{1,2} \in [T, 2T]$, ..., $t_{1,k} \in [(k-1)T, kT]$, ..., where $t_{1,1}$ is chosen in order to minimize the mean variance over the interval $[0,T]$; then $t_{1,2}$ is chosen in order to minimize the mean variance over the interval $[T, 2T]$, provided that the value $v(T)$ (which depends on $t_{1,1}$) is used as $v_0$ (i.e., $T$ is the left endpoint of the interval, and $v(T)$ is the variance of the prior information about $\theta(T)$), etc.

The parameters are: $T$, $v_1$ (the error variance of every measurement) and $v_0$ (the variance of the prior information about $\theta(0)$). The intervals $[0,T], [T,2T], \dots$ will be called “windows”. The main result of this section is Theorem 4: for $k$ big enough, $t_{1,k} = (k-1)T$, i.e., the measurements are done at the left endpoints of the corresponding windows.

Theorem 4.
In the setting described above, the sequence of measurement instants satisfies $t_{1,k} = (k-1)T$ for $k$ large enough. Therefore, it is ultimately periodic.

Theorem 4 is proved in the Technical Report [2]. Figure 4 illustrates this setting.

One can remark that the result above resembles the results of [9]. In [9], the moments of measurement are strictly periodic, while the sensor is chosen using a local optimization. On the other hand, in the present setting, the sensor cannot be chosen, while the instants of measurement are chosen in periodic windows. Ultimate periodicity holds as a qualitative result in both cases.
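The windowed strategy can be simulated directly. The sketch below is our code; it reuses the closed form (15) and parameter values of our own choosing, and shows the within-window offsets collapsing to the left endpoint, as Theorem 4 states:

```python
import math

def t1_opt(T, v0, v1, sigma2=1.0):
    # eq. (15): optimal instant of one measurement in [0, T]
    s2T = sigma2 * T
    root = math.sqrt((s2T + v0 + 5*v1)**2 - (4*v1)**2)
    return max(0.0, (s2T - 3*v0 - 3*v1 + root) / (4*sigma2))

def window_instants(T, v_meas, v_prior, n_windows, sigma2=1.0):
    """Greedy per-window scheduling: in each window of length T, place the
    single measurement optimally given the current prior variance, then
    propagate v(kT) as the prior of the next window."""
    instants = []
    for _ in range(n_windows):
        t = t1_opt(T, v_prior, v_meas, sigma2)
        g_pred = v_prior + sigma2 * t
        gamma = g_pred * v_meas / (g_pred + v_meas)   # variance after measuring
        v_prior = gamma + sigma2 * (T - t)            # prior for the next window
        instants.append(t)                            # offset within the window
    return instants
```

With $T = 2$, $v_1 = 1$, $\sigma^2 = 1$ and an initial prior $v_0 = 0.5$ (illustrative values, ours), the offsets decrease window after window and reach $0$ after a few windows, then stay there: the prior variance grows toward its fixed point, the criterion (17) eventually holds, and the schedule becomes periodic.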
IV. THE OPTIMAL INSTANTS OF TWO MEASUREMENTS

A. Overview of the Results.
In this section, it is supposed that the observer is allowed to choose the instants $t_1$ and $t_2$ ($0 \leq t_1 \leq t_2 \leq T$) for $n = 2$ measurements with measurement noises $v_1, v_2$ respectively. Certain questions listed in Section II-C above are answered with explicit formulas.

The cost function (12) can be expressed in one of the forms:

$J_{\sigma^2,T,v_0,v_1,v_2}(t_1,t_2) = \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + \dfrac{\sigma^2 \triangle t^2}{2} + \dfrac{(\sigma^2 t_1 + v_0)\,v_1}{\sigma^2 t_1 + v_0 + v_1}\,\triangle t + \dfrac{\sigma^2 (T-t_2)^2}{2} + \Gamma_{t_2}^{t_2}(T-t_2)$

$= \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + J_{\sigma^2,\,T-t_1,\,\Gamma_{t_1}^{t_1},\,v_2}(t_2 - t_1)$

$= \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + \dfrac{\sigma^2 \triangle t^2}{2} + \dfrac{(\sigma^2 t_1 + v_0)\,v_1}{\sigma^2 t_1 + v_0 + v_1}\,\triangle t + \dfrac{\sigma^2 (T-t_2)^2}{2} + \dfrac{v_2\big(v_1(v_0+\sigma^2 t_1) + \sigma^2\triangle t\,(v_0+v_1+\sigma^2 t_1)\big)}{(v_1+v_2)(v_0+\sigma^2 t_1) + v_1 v_2 + \sigma^2 \triangle t\,(v_0+v_1+\sigma^2 t_1)}\,(T-t_2)$, (26)

where $\triangle t = t_2 - t_1$.

It is proved (Theorems 5, 6, 7) that this cost function has a unique coordinatewise local minimum which is, therefore, a global minimum. A coordinatewise local minimum (CWLM) is defined, in a way analogous to [12], as follows.

Definition 1.
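The nested form of (26) is convenient in code: the two-measurement cost is the contribution before $t_1$ plus a one-measurement cost on the remaining interval, started from $\Gamma_{t_1}^{t_1}$. A sketch (our code) verifying this identity numerically:

```python
def J1(t, T, v0, v1, sigma2=1.0):
    # one-measurement cost (14)
    gamma = (sigma2*t + v0) * v1 / (sigma2*t + v0 + v1)
    return sigma2*t**2/2 + v0*t + sigma2*(T-t)**2/2 + gamma*(T-t)

def J2(t1, t2, T, v0, v1, v2, sigma2=1.0):
    # two-measurement cost (26), evaluated through the Kalman recursion (8)
    gamma = (sigma2*t1 + v0) * v1 / (sigma2*t1 + v0 + v1)
    d = t2 - t1
    J = sigma2*t1**2/2 + v0*t1 + sigma2*d**2/2 + gamma*d
    g_pred = gamma + sigma2*d
    gamma2 = g_pred * v2 / (g_pred + v2)
    return J + sigma2*(T-t2)**2/2 + gamma2*(T-t2)
```

The second form of (26) then reads `J2(t1, t2, T, v0, v1, v2) == sigma2*t1**2/2 + v0*t1 + J1(t2-t1, T-t1, Gamma, v2)` with `Gamma` the post-measurement variance at $t_1$.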
Let $f : D \subset \mathbb{R}^2 \to \mathbb{R}$ be a real-valued function, and let $z = (z_1, z_2) \in D$. Then the point $z$ is called a coordinatewise local minimum (CWLM) of $f$ if $\exists\, \epsilon > 0$ such that $\forall d \in\, ]-\epsilon, \epsilon[$,

$(z_1 + d, z_2) \in D \implies f(z + (d, 0)) \geq f(z)$ and $(z_1, z_2 + d) \in D \implies f(z + (0, d)) \geq f(z)$.

Fig. 4. Iterations of the function $F_{T,v_1}$ from the initial value $v_0$. The parameters are $v_1 = 1$, $\sigma^2 = 1$. The values $v_0, v(T), v(2T), \dots$ are written on the figure.

The argmin of $J_{\sigma^2,T,v_0,v_1,v_2}$ (unique) is denoted

$\big(t_{1,\mathrm{opt}}^{(2)}(\sigma^2, T, v_0, v_1, v_2),\ t_{2,\mathrm{opt}}^{(2)}(\sigma^2, T, v_0, v_1, v_2)\big)$ (27)

in accordance with the general notation (13).

One of the general remarks is that, if $t_1$ is fixed, the subproblem of determining the optimal instant $t_2$ relative to $t_1$ is reduced to determining the optimal instant of one measurement (see Section III) with the following parameters: the length of the process is $T - t_1$, the variance of the estimate of the initial state is $\Gamma_{t_1}^{t_1}$, and the error variance of the measurement is $v_2$:

$\arg\min_{t_2} J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2) = t_1 + t_{\mathrm{opt}}^{(1)}\big(\sigma^2, T - t_1, \Gamma_{t_1}^{t_1}, v_2\big)$. (28)

Finding the minimum of the cost function $J_{\sigma^2,T,v_0,v_1,v_2}$, studying its properties (uniqueness, position, etc.) and its dependence on the parameters (such as monotonicity and continuity) is the goal of this section. An important property of the minimum is its position on the border or in the interior of the domain of definition of the function. It is sufficient to consider three qualitatively different properties of the optimal schedule (“regimes”): either $t_1 = t_2 = 0$ (regime 1), or $0 = t_1 < t_2$ (regime 2), or $0 < t_1 \leq t_2$ (regime 3). Figure 5 shows examples of the cost function which correspond to the different regimes. This consideration is analogous to the one made in the case of one measurement.

Regime 1 is observed when $T$ is small enough. Then, if $t_1 = 0$ is fixed, the optimal instant for the second measurement (determined by (28)) is also zero. By Theorem 5 below, this is equivalent to saying that $(0, 0)$ is the globally optimal schedule of measurements.

When regime 1 is not observed, the optimal instant for the second measurement is strictly positive. One can search for the optimal schedule using coordinate descent from $(0, 0)$. The first step is finding the optimal instant of the second measurement when the first measurement is done at $0$, using (28). Call this instant $t_2^{\langle 1 \rangle} \in\, ]0, T]$. In the second step, find the optimal instant of the first measurement when the second measurement is done at $t_2^{\langle 1 \rangle}$. Call this instant $t_1^{\langle 1 \rangle} \in [0, t_2^{\langle 1 \rangle}[$. If $t_1^{\langle 1 \rangle} = 0$, the algorithm finishes and returns the schedule $(0, t_2^{\langle 1 \rangle})$. This situation will be called regime 2. By Theorem 6, this schedule is indeed optimal. In regime 3, the coordinate descent does not terminate after the first steps, i.e., $t_1^{\langle 1 \rangle} > 0$. Then it is optimal to perform both measurements in the interior of the interval $[0, T]$ (Theorem 6). The distinction between regime 2 and regime 3 can be done by computing the partial derivative with respect to $t_1$ of the cost function (26) at $(0, t_2^{\langle 1 \rangle})$ or, equivalently, by comparing $T$ to a critical value.

The largest duration $T$ such that regime 1 is observed will be denoted $T_{2,\mathrm{crit}}^{(2)}$ (it can be computed using (29)). Similarly, the largest duration $T$ such that the optimal schedule lies on the border $t_1 = 0$ will be denoted $T_{1,\mathrm{crit}}^{(2)}$ (it can be computed using (34)). Figure 6 shows different examples of the functions $t_1 \mapsto J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2^{\langle 1 \rangle})$ which can be observed during the second step of this coordinate descent in sample situations.

Section IV is organized as follows. A criterion of regime 1, together with a proof that the optimal schedule does not satisfy $0 < t_{1,\mathrm{opt}}^{(2)} = t_{2,\mathrm{opt}}^{(2)}$ (Lemma 1), is given in Subsection IV-C.
The critical regime, on the border between regimes 2 and 3, is studied in Subsection IV-D. In particular, closed-form formulas are found for determining to which regime a given set of parameters $\sigma^2, T, v_0, v_1, v_2$ belongs. Equations for the optimal instants in regime 3 follow from the results of Subsection IV-D. These are discussed in Subsection IV-E. Some properties of the optimal instants are deduced from these equations. The coordinate descent algorithm can be used for finding the optimal measurement instants in regime 3. It is shown that this algorithm cannot converge to a point different from the global minimum of the cost function. This follows from the uniqueness of a CWLM of the cost function $J(t_1, t_2)$ (Theorem 9, Subsection IV-E).

B. Strategy of Proof.
Proving the uniqueness of a CWLM of the cost function $J(t_1, t_2)$ is done by considering first the borders of its domain of definition, then the interior. The border $t_1 = t_2$ (represented by the diagonal in the plots of Figure 5) is studied in Subsection IV-C. The border $t_1 = 0$ (represented by the left side in the plots of Figure 5) is studied in Subsection IV-D. The schedules on the border $t_2 = T$ can be improved upon by decreasing $t_2$, according to the results relative to one measurement. The interior is studied in Subsections IV-D and IV-E using the previous results.

Fig. 5. Examples of the cost function $J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2)$. In all plots, $\sigma^2 = 1$ and $v_0 = v_1 = v_2 = 1$; the three examples differ by the value of $T$.

Fig. 6. Examples of the cost function $t_1 \mapsto J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2^{\langle 1 \rangle})$ in the examples of Figure 7. One can observe the difference between regime 2 (the function is increasing) and regime 3 (the minimum is located inside the interval).

Fig. 7. Sample examples of the problem of seeking the optimal instants of two measurements. In all examples, it is supposed that $\sigma^2 = 1$ and regime 1 is not observed. The columns $v_0, v_1, v_2, t_2^{\langle 1 \rangle}$ are parameters, while the other columns can be computed using the formulas of the present article. Figure 6 shows the functions to optimize when finding $t_2^{\langle 1 \rangle}$ during the first step of the coordinate descent.

C. Simultaneous Measurements ($t_1 = t_2$).

Taking both measurements at the same time makes them equivalent to a single measurement of smaller error variance $v_{1,2}$. Therefore, the performance of such a schedule is the same as one achieved by one measurement. Lemma 1 shows that, except in the case where the measurements are at the instant $0$, such a schedule can be improved upon by a small displacement of the instant of one measurement. The rest of this subsection is devoted to studying the optimality of taking both measurements at $0$ (regime 1).

Lemma 1.
Consider the cost function (26) defined on the triangular domain $\mathcal{T}_T = \{(t_1, t_2)\ \mathrm{s.t.}\ 0 \leq t_1 \leq t_2 \leq T\}$. Let $0 < t < T$. Then the point $(t, t)$ is not a coordinatewise local minimum of $J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2)$.

Lemma 1 is proved in the Technical Report [2]. It corresponds to the intuitive idea that the instants of measurement have a
tendency to “repulse” each other.

The following criterion for deciding whether both optimal instants equal zero (regime 1) extends the criterion (17) from the case of one measurement to the case of two measurements.
Theorem 5.
The global minimum of the cost function (26) is reached at the point $(0, 0)$ if and only if

$T \leq T_{2,\mathrm{crit}}^{(2)}(\sigma^2, v_0, v_1, v_2) = T_{\mathrm{crit}}^{(1)}(\sigma^2, v_0, v_{1,2}) = \dfrac{v_0}{\sigma^2\left(\dfrac{v_{1,2}}{v_0 + v_{1,2}} + 1\right)}$. (29)

Moreover, when (29) holds, the point $(0, 0)$ is the unique CWLM of the function (26).

Proof. Direct part. Suppose the minimum is at $(0, 0)$. In particular, the function

$t_2 \mapsto J_{\sigma^2,T,v_0,v_1,v_2}(0, t_2) = J_{\sigma^2,T,v_{0,1},v_2}(t_2)$ (30)

has its minimum at $t_2 = 0$; therefore regime 1 in the sense of a single measurement is observed (cf. Subsection III-B). Therefore the criterion (17), applied to the parameters $\sigma^2, T, v_{0,1}, v_2$, is valid. This is (29).

Inverse part. Suppose $T \leq T_{2,\mathrm{crit}}^{(2)}$. For any $t_1 \in [0, T[$, the minimum of the function

$[t_1, T] \ni t_2 \mapsto J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2) = \dfrac{\sigma^2 t_1^2}{2} + v_0 t_1 + J_{\sigma^2,\,T-t_1,\,\Gamma_{t_1}^{t_1},\,v_2}(t_2 - t_1)$ (31)

is attained at the same point as the minimum of

$[t_1, T] \ni t_2 \mapsto J_{\sigma^2,\,T-t_1,\,\Gamma_{t_1}^{t_1},\,v_2}(t_2 - t_1)$ (32)

and can be found using the results of Subsection III-B. More precisely, regime 1 is observed. Indeed, as the function $v \mapsto T_{\mathrm{crit}}^{(1)}(\sigma^2, v, v_2)$ is increasing and $\Gamma_{t_1}^{t_1} \geq v_{0,1}$, one has

$T - t_1 \leq T \leq T_{\mathrm{crit}}^{(1)}(\sigma^2, v_{0,1}, v_2) \leq T_{\mathrm{crit}}^{(1)}\big(\sigma^2, \Gamma_{t_1}^{t_1}, v_2\big)$. (33)

This implies that, under the hypothesis $T \leq T_{2,\mathrm{crit}}^{(2)}$, all CWLMs of the cost function $J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2)$ are points of the type $(t_1, t_1)$, that is, on the diagonal. By Lemma 1, the only candidate for being a CWLM of $J_{\sigma^2,T,v_0,v_1,v_2}(t_1, t_2)$ is the point $(0, 0)$, which is, therefore, its global minimum.

Remark that the critical duration $T_{2,\mathrm{crit}}^{(2)}$ is an increasing function of $v_0$ and a decreasing function of $v_1$, of $v_2$ and of $\sigma^2$.

It is proved in this subsection that a CWLM on the diagonal can only be achieved at $(0, 0)$. Moreover, (29) provides a necessary and sufficient condition (depending on the parameters) which allows one to check whether $(0, 0)$ is indeed a CWLM. This is equivalent to regime 1.

D. The Boundary $t_1 = 0$.
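Criterion (29) is easy to evaluate; a short sketch (our code, using the form of $T_{\mathrm{crit}}^{(1)}$ reconstructed in (17)):

```python
def T2_crit_regime1(v0, v1, v2, sigma2=1.0):
    """Eq. (29): (0, 0) is optimal iff T <= this threshold; v12 = v1 // v2."""
    v12 = v1 * v2 / (v1 + v2)
    return v0 / (sigma2 * (v12 / (v0 + v12) + 1.0))
```

For $v_0 = v_1 = v_2 = \sigma^2 = 1$, the threshold is $0.75$; as remarked above, it increases with $v_0$.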
If (29) does not hold, consider the boundary $t_1 = 0$. Taking the first measure at time zero leads to the same performance as the setting with one measure of error variance $v_2$ and initial information of smaller error variance $v_{0,1}$. Theorem 6 shows that the optimal schedule is of this type for some values of the parameters. This subsection is devoted to studying when this is satisfied. The following result answers the question whether the minimum is located on the boundary.

Theorem 6.
The global minimum of the cost function $J_{\sigma^2,T,v_0,v_1,v_2}$ is located on the line $(0,\cdot)$ if and only if $T \le T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$, where
$$T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2) = T_t\big(\sigma^2,\, t_{1,2,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2),\, v_{0,1},\, v_2\big). \quad (34)$$
Here, the function $T_t$ is defined by
$$T_t(\sigma^2, t, v_0, v_1) = 2t + \frac{v_0 - v_1}{\sigma^2} + \frac{2 v_1^2}{\sigma^2\,(v_0 + \sigma^2 t + 2 v_1)}, \quad (35)$$
and $\sigma^2\, t_{1,2,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$ is the largest root of the equation
$$A x^3 + B x^2 + C x + D = 0 \quad (36)$$
with coefficients
$$A(v_0, v_1, v_2) = -(v_0 + v_1)^2 (v_0 + 2 v_1), \quad (37)$$
$$B(v_0, v_1, v_2) = (v_0 + v_1)\big[v_0 (v_0 + v_1)^2 - (v_0 + 2 v_1)(2 v_0 v_1 + 3 v_0 v_2 + 3 v_1 v_2)\big], \quad (38)$$
$$C(v_0, v_1, v_2) = v_0 (v_0 + v_1)^2 (2 v_0 v_1 + 3 v_0 v_2 + 3 v_1 v_2) - (v_0 + 2 v_1)\big[v_0^2 v_1^2 + 3 v_0 v_1 v_2 (v_0 + v_1) + 3 v_2^2 (v_0 + v_1)^2\big], \quad (39)$$
$$D(v_0, v_1, v_2) = v_0^2 (v_0 + v_1)(v_1 + v_2)(v_0 v_1 + 2 v_0 v_2 + 3 v_1 v_2). \quad (40)$$
If $T \le T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$, the minimum of the cost function is located at the point $(0, t^{\langle 0 \rangle})$, where
$$t^{\langle 0 \rangle} = t^{(1)}_{\mathrm{opt}}(\sigma^2, T, v_{0,1}, v_2) \quad (41)$$
according to the more general equation (28).

Theorem 6 is proved in the Technical Report [2]. According to Theorems 6 and 5, the optimal schedule is of the form $(0,\cdot)$, but not $(0,0)$, if and only if
$$T^{(2)}_{2,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2) < T \le T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2), \quad (42)$$
where $T^{(2)}_{1,\mathrm{crit}}$ is defined by (34)-(40) and $T^{(2)}_{2,\mathrm{crit}}$ is defined by (29). This case will be called regime 2.

The proof of Theorem 6 immediately leads to the following corollaries.

Corollary 1. If $T \le T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$, the point $(0, t^{\langle 0 \rangle})$ is the only CWLM of the cost function.

Corollary 2.
Let $\sigma^2, v_1, v_2 \in \mathbb{R}^*_+$. Then, the critical durations $T^{(2)}_{1,\mathrm{crit}}$ and $T^{(2)}_{2,\mathrm{crit}}$, as well as the duration $t_{1,2,\mathrm{crit}}$ appearing in the formulation of Theorem 6, are strictly increasing functions of $v_0$.
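Computing $t_{1,2,\mathrm{crit}}$ in Theorem 6 reduces to extracting the largest real root of the cubic (36). The pure-Python sketch below is a hypothetical helper (its name, scan window and tolerances are our own choices, not the paper's code): it scans for the last sign change of the cubic and refines it by bisection. In the intended use, the coefficients would be supplied by (37)-(40).

```python
# Illustrative helper: largest real root of A*x^3 + B*x^2 + C*x + D on [0, x_max].
# The window and resolution are arbitrary choices for the sketch.

def largest_real_root(A, B, C, D, x_max=100.0, samples=100000):
    p = lambda x: ((A * x + B) * x + C) * x + D
    h = x_max / samples
    # scan downwards for the last sign change, then bisect
    for k in range(samples, 0, -1):
        lo, hi = (k - 1) * h, k * h
        if p(lo) == 0.0:
            return lo
        if p(lo) * p(hi) < 0.0:
            for _ in range(100):           # bisection refinement
                mid = (lo + hi) / 2.0
                if p(lo) * p(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2.0
    return None

# p(x) = -(x - 1)(x - 2)(x - 3): the largest real root is 3.
print(largest_real_root(-1.0, 6.0, -11.0, 6.0))
```

A closed-form solution of the cubic (Cardano's formulas) would of course also work; the bracketing approach is shown because it extends unchanged to the other scalar equations of this article.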
The quantity $t_{1,2,\mathrm{crit}}$ appearing in the formulation of Theorem 6 has the following interpretation: the optimal schedule in the case of the duration $T = T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$ is $(0, t_{1,2,\mathrm{crit}})$.

E. The general case and its Properties.
Suppose that the process is long enough, i.e.
$T > T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$, so that regime 3 is observed (Theorem 6). In this subsection, equations for determining the optimal instants of measure $t^{(2)}_{1,\mathrm{opt}}$ and $t^{(2)}_{2,\mathrm{opt}}$ will be derived. The nontrivial optimal instants, which cannot be computed using the formulae for one measure, are defined in this section, and some properties of these optimal instants are proved.

Theorem 7.
Suppose that the length of the process is larger than the critical duration:
$T > T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2)$. Then, the cost function $J_{\sigma^2,T,v_0,v_1,v_2}$ has a unique CWLM $(t^{(2)}_{1,\mathrm{opt}}, t^{(2)}_{2,\mathrm{opt}})$, and it satisfies
$$t^{(2)}_{2,\mathrm{opt}} - t^{(2)}_{1,\mathrm{opt}} = t^{(1)}_{\mathrm{opt}}\big(\sigma^2,\; T - t^{(2)}_{1,\mathrm{opt}},\; (v_0 + \sigma^2 t^{(2)}_{1,\mathrm{opt}}) /\!/ v_1,\; v_2\big), \quad (43)$$
$$t^{(2)}_{2,\mathrm{opt}} - t^{(2)}_{1,\mathrm{opt}} = t_{1,2,\mathrm{crit}}\big(\sigma^2,\; v_0 + \sigma^2 t^{(2)}_{1,\mathrm{opt}},\; v_1,\; v_2\big), \quad (44)$$
where the function $t_{1,2,\mathrm{crit}}$ is defined through the largest real root of the equation (36) with coefficients (37)-(40), the function $t^{(1)}_{\mathrm{opt}}$ is defined by Equation (15), and $a /\!/ b = ab/(a+b)$ denotes the harmonic sum. Moreover, the system of equations (43), (44) has a unique solution with respect to the variables $t^{(2)}_{1,\mathrm{opt}},\; t^{(2)}_{2,\mathrm{opt}} - t^{(2)}_{1,\mathrm{opt}} \in \mathbb{R}^*_+$.

Theorem 7 is proved in the Technical Report [2].

The system of equations (43), (44) is of the form
$$y = I^{(1)}_{\sigma^2,T,v_0,v_1,v_2}(x), \qquad y = I^{(2)}_{\sigma^2,v_0,v_1,v_2}(x),$$
where $x = t^{(2)}_{1,\mathrm{opt}}$ and $y = t^{(2)}_{2,\mathrm{opt}} - t^{(2)}_{1,\mathrm{opt}}$. It is interesting to study the behavior of the functions $I^{(1)}$ and $I^{(2)}$ which appear in this system. Their full definitions are
$$I^{(1)}_{\sigma^2,T,v_0,v_1,v_2}(\tilde t) = \begin{cases} 0 & \text{if } \tilde t \ge T, \\ t^{(1)}_{\mathrm{opt}}\big(\sigma^2,\, T - \tilde t,\, (v_0 + \sigma^2 \tilde t) /\!/ v_1,\, v_2\big) & \text{otherwise}, \end{cases} \quad (45)$$
and
$$I^{(2)}_{\sigma^2,v_0,v_1,v_2}(\tilde t) = t_{1,2,\mathrm{crit}}(\sigma^2,\, v_0 + \sigma^2 \tilde t,\, v_1,\, v_2). \quad (46)$$
Here, the continuation of the function $I^{(1)}(\tilde t)$ by zero for large values of the argument serves a purely technical purpose.

The function $I^{(2)}$ has the following interpretation. Given the parameters $\sigma^2, v_0, v_1, v_2$, to each $\tilde t \in \mathbb{R}_+$ a unique duration $T$ is associated such that $\tilde t$ is the optimal instant of the first measure. Then, $I^{(2)}(\tilde t)$ is the distance between the optimal instants for this duration $T$. By Corollary 2, it is strictly increasing.

The function $I^{(1)}$ has a simpler definition, and its interpretation is the following: given $\sigma^2, T, v_0, v_1, v_2$, it associates to each $\tilde t_1$ (suboptimal in general) the best interval $\tilde t_2 - \tilde t_1$ between the measures. The function $I^{(1)}$ “selects” the point associated to the given length $T$ on the graph of $I^{(2)}_{\sigma^2,v_0,v_1,v_2}$. It is decreasing by Proposition 2. The optimal schedule corresponds to the intersection point of the graphs of these functions. Figure 8 shows an example of the behavior of the functions $I^{(1)}$ and $I^{(2)}$.

Fig. 8. The functions $I^{(2)}(t_1)$ and $I^{(1)}_T(t_1)$. The fixed parameters are $\sigma^2 = 1$, $v_0 = 1$, $v_1 = 2$, $v_2 = 3$; $T$ takes several values. The optimal instant of the first measure $t^{(2)}_{1,\mathrm{opt}}$ is the abscissa of the point of intersection; the distance between the measures in the optimal schedule is its ordinate.

The next theorems assemble the results for all three regimes and answer the conjectures announced in Sections II-C and IV-A.

Theorem 8.
Let $\sigma^2, v_0, v_1, v_2 \in \mathbb{R}^*_+$. For each $T \ge 0$, the minimizer of the cost function $J_{\sigma^2,T,v_0,v_1,v_2}$ is unique. The functions $T \mapsto t^{(2)}_{1,\mathrm{opt}}(\sigma^2,T,v_0,v_1,v_2)$ and $T \mapsto t^{(2)}_{2,\mathrm{opt}}(\sigma^2,T,v_0,v_1,v_2)$ are continuous and monotonically increasing.

Theorem 8 is proved in the Technical Report [2]. Moreover, the cost functions $J(t_1,t_2)$ have unique CWLM's.

Theorem 9.
Let $\sigma^2, v_0, v_1, v_2, T \in \mathbb{R}^*_+$. Then, the function $J_{\sigma^2,T,v_0,v_1,v_2}(t_1,t_2)$ has a unique CWLM.

Proof. The theorem follows from Theorem 5 (in the case of regime 1), Theorem 6 (in the case of regime 2) and Theorem 7 (in the case of regime 3). $\square$

The global behavior of the optimal instants is illustrated in Figure 9. Both measures should be done as fast as possible for small $T$ (regime 1). When the duration $T$ is larger than a critical value $T^{(2)}_{2,\mathrm{crit}}$, the instant of the second measure becomes distinct from zero and increases (regime 2). When the duration $T$ is larger than another critical value $T^{(2)}_{1,\mathrm{crit}}$, the instant of the first measure becomes distinct from zero as well and increases (regime 3). Both optimal instants of measure exhibit continuity at the critical durations. This behavior is in accordance with Theorems 5, 6 and 8.

Fig. 9. $t^{(2)}_{1,\mathrm{opt}}(\sigma^2,T,v_0,v_1,v_2)$ and $t^{(2)}_{2,\mathrm{opt}}(\sigma^2,T,v_0,v_1,v_2)$ as functions of $T$ in a particular case. The parameters are $\sigma^2 = 1$, $v_0 = v_1 = v_2 = 1$.

F. The numerical algorithm of Coordinate Descent.
Theorem 7 provides a convenient theoretical description of the optimal schedule $(t^{(2)}_{1,\mathrm{opt}}, t^{(2)}_{2,\mathrm{opt}})$ in regime 3. Let us look for an efficient algorithm for finding numeric values of these instants. Coordinate descent is proposed as such an algorithm in this article. The first step of the coordinate descent is important as well in defining the regimes. This algorithm is described in Appendix A.

Updating $t_1$ amounts to finding the minimum of a cost function $J(t)$ of a special type defined on a real interval. The golden-section search is used in this step. Some examples of functions of this class are given in Figure 6. It can be conjectured that all functions of this class are quasi-convex. If the function $J(t)$ is quasi-convex, the golden-section search is guaranteed to converge to the minimum of this function.

The cost function $J(t_1,t_2)$ is guaranteed to have only one CWLM; therefore, the coordinate descent cannot converge to a point different from the global minimum of the function.

G. The Experimental Performance of the Coordinate Descent.

Random runs of the algorithm have been performed in order to explore its convergence and the speed of convergence. The parameters $\sigma^2 = 1$ and $T = 10$ were fixed, and the triples $(v_0, v_1, v_2)$ were chosen randomly, according to the uniform distribution, from the region of a cube which corresponds to regime 3. More precisely, candidate points were drawn uniformly from the cube, and they were used in the experiment only if they satisfied the condition of regime 3: $T^{(2)}_{1,\mathrm{crit}}(\sigma^2, v_0, v_1, v_2) < T$. The results are shown in Figure 10. They suggest an exponential convergence. Furthermore, the steps became negligibly small after a small number of iterations in all runs.

H. Comparison between the optimal and the regular schedules.
A numerical experiment estimating the gain of the optimal schedule compared to the intuitive regular sampling $(T/3, 2T/3)$ has been performed. The optimal schedules $(t^{(2)}_{1,\mathrm{opt}}, t^{(2)}_{2,\mathrm{opt}})$ have been computed, together with the associated costs $J_{\mathrm{opt}}$, for $\sigma^2 = 1$, $T = 1$, three values of $v_0$, and $v_1, v_2$ varying over a range of values (cf. Figure 11). The costs $J_{\mathrm{reg}}$ achieved with the regular sampling have been computed as well. Figure 11 shows three contour plots of the relative gain $\frac{J_{\mathrm{reg}} - J_{\mathrm{opt}}}{J_{\mathrm{reg}}}$ as a function of $v_1, v_2$. These figures can be compared with the gain in the case of one measure (Figure 3). For parameters in the considered range, the gain can be substantial.

V. CONCLUSION AND PERSPECTIVES
Sampling strategies for a phenomenon of finite length have been investigated under the assumption that the phenomenon can only be measured a small number of times by instruments with different properties (error variances). Irregular sampling can lead to a substantial gain in the mean error variance of the estimator.

The assumption of a small number of available measures can be satisfied if the process itself is short, or if the measurement devices have a limited (and non-renewable) physical resource, e.g. [11]. This can also happen if each measure is expensive.

A simple model is studied, where the variance about the system parameters (here a single parameter) evolving over a finite period of time grows linearly in the absence of measures. The properties of the optimal measurement timetable according to the criterion of minimization of the mean variance are considered.

In Section III, the particular case where the instant of exactly one measure is to be chosen is studied in detail. Section IV is devoted to the particular case where the instants of two measures are to be chosen.

The system can behave in different regimes. When the duration of the process is short, it is optimal to take all measures at the moment zero. If it is larger than a critical value, one optimal instant of measure moves from zero to the inside of the interval. In the case of one measure there is one critical duration, while in the case of two measures there are two.

It is proved that the critical durations in the case of 1 or 2 measures are increasing functions of $v_0$. This corresponds to a simple intuition: the larger $v_0$ is, the less exact is the initial information, and the higher are the chances that it should be supported by a measure. This also corresponds to the intuition stated in the introduction: in the optimal sampling, the more precise measure may be made shortly after the less precise one.

The computations relative to the case $n = 2$ (shown in Figure 11) suggest that when $v_0 \ll \sigma^2 T, v_1, v_2$, the gain in comparison with the regular schedule is modest.
On the other hand, it increases if the variance $v_0$ of the initial information increases, or if the variances $v_1, v_2$ of the measures are very different. The first conclusion is also confirmed experimentally in the case of one measure (see Figure 3(b)).

A setting where a large number of measures are made under a constraint of periodic “windows” is considered in Section
III-D. The instants of measurement are determined using local optimization. It is shown that local optimization leads to regular sampling (Theorem 4) when the number of measures is large. This result suggests that global optimization is necessary for getting an improvement of performance.

One goal of future research is to find the optimal (in the sense of the cost function (12)) measurement instants when the number of measures is $n > 2$. The methods of this article can be adapted. Some qualitatively new conjectures also appear from the experiments in this setting. Allowing the number of measures to vary is another possible development of the results presented here.

In this problem, the order of the measures is fixed. It is also possible to allow it to vary. The main property of this problem is the fact that the cost function is no longer rational, but piecewise-rational.

Another objective of future research is to consider more complex models than the real Brownian motion considered presently.

APPENDIX
PSEUDO-CODE OF THE COORDINATE DESCENT ALGORITHM

if $T \le \dfrac{v_{0,1}(v_{0,1}+v_2)}{\sigma^2 (v_{0,1}+2 v_2)}$ then
    return $(0, 0)$ (regime 1)
else
    $t^{\langle 0 \rangle} := t^{(1)}_{\mathrm{opt}}(\sigma^2, T, v_{0,1}, v_2)$
    if $A(v_0,v_1,v_2)(\sigma^2 t^{\langle 0 \rangle})^3 + B(v_0,v_1,v_2)(\sigma^2 t^{\langle 0 \rangle})^2 + C(v_0,v_1,v_2)\,\sigma^2 t^{\langle 0 \rangle} + D(v_0,v_1,v_2) \ge 0$ then
        return $(0, t^{\langle 0 \rangle})$ (regime 2)
    else (regime 3: coordinate descent)
        $t_2 := t_2^{\langle 0 \rangle} := t^{\langle 0 \rangle}$
        $t_1 := t_1^{\langle 0 \rangle} := \arg\min_{t} J_{\sigma^2,T,v_0,v_1,v_2}(t, t_2^{\langle 0 \rangle})$
        repeat
            $t_2 := t_2^{\langle k \rangle} := t_1^{\langle k-1 \rangle} + t^{(1)}_{\mathrm{opt}}\big(\sigma^2,\, T - t_1^{\langle k-1 \rangle},\, (v_0 + \sigma^2 t_1^{\langle k-1 \rangle}) /\!/ v_1,\, v_2\big)$
            $t_1 := t_1^{\langle k \rangle} := \arg\min_{t} J_{\sigma^2,T,v_0,v_1,v_2}(t, t_2^{\langle k \rangle})$
        until convergence
    end if
end if

Fig. 12. Computation of the optimal instants of measures. Arguments: $\sigma^2, T, v_0, v_1, v_2 \in \mathbb{R}^*_+$.

REFERENCES

[1] A. Aksenov, P.-O. Amblard, O. Michel, Ch. Jutten, “Optimal measurement times for observing a Brownian motion over a finite period using a Kalman filter,” Lecture Notes in Computer Science, vol. 10169, 2016.
[2] A. Aksenov, P.-O. Amblard, O. Michel, Ch. Jutten, “Technical report for the article ‘Optimal Measurement Times for a Small Number of Measures of a Brownian Motion over a Finite Period’.”
[3] A. Bourrier, P.-O. Amblard, O. Michel, Ch. Jutten, “Multimodal Kalman filtering,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, Mar. 2016.
[4] V. Gupta, T. H. Chung, B. Hassibi, R. M. Murray, “On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage,” Automatica, vol. 42, no. 2, pp. 251–260, Feb. 2006.
[5] D. L. Hall, J. Llinas, “An introduction to multisensor data fusion,” Proceedings of the IEEE, vol. 85, no. 1, pp. 6–38, Jan. 1997.
[6] K. Herring, J. Melsa, “Optimum measurements for estimation,” IEEE Transactions on Automatic Control, vol. 19, no. 3, pp. 264–266, 1974.
[7] A. H. Jazwinski, Stochastic Processes and Filtering Theory, Mathematics in Science and Engineering, Academic Press, New York, 1970.
[8] D. Lahat, T. Adalı, Ch. Jutten, “Multimodal data fusion: an overview of methods, challenges and prospects,” Proceedings of the IEEE, vol. 103, no. 9, pp. 1449–1477, Sept. 2015.
[9] L. Orihuela, A. Barreiro, F. Gómez-Estern, F. R. Rubio, “Periodicity of Kalman-based scheduled filters,” Automatica, pp. 2672–2676, 2014.
[10] S. I. Roumeliotis, G. A. Bekey, “Distributed multi-robot localization,” Distributed Autonomous Robotic Systems, pp. 179–188.
[11] P. G. Ryan, S. L. Petersen, G. Peters, D. Grémillet, “GPS tracking a marine predator: the effects of precision, resolution and sampling rate on foraging tracks of African Penguins,” Marine Biology, pp. 215–223, 2004.
[12] P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” Journal of Optimization Theory and Applications, no. 3, pp. 475–494, June 2001.
[13] A. Warrington, N. Dhir, “Generalising cost-optimal particle filtering,” ICRA 2018: Workshop on Informative Path Planning and Adaptive Sampling, May 2018.
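The pseudo-code of Fig. 12 (Appendix A) can be prototyped in a few lines. The sketch below is an illustrative Python reconstruction, not the authors' implementation: `mean_variance` stands in for the cost $J$, the parameter values are arbitrary, and the closed-form update of $t_2$ is replaced by a second golden-section step, which is legitimate whenever the partial cost functions are quasi-convex, as conjectured in Subsection F.

```python
import math

def mean_variance(T, sig2, v0, schedule):
    """Mean Kalman error variance over [0, T]; schedule = [(time, noise var)]."""
    P, t, acc = v0, 0.0, 0.0
    for ti, vi in schedule:
        dt = ti - t
        acc += P * dt + sig2 * dt * dt / 2.0   # integral of P on [t, ti]
        P += sig2 * dt                          # prediction
        P = P * vi / (P + vi)                   # measurement update
        t = ti
    dt = T - t
    acc += P * dt + sig2 * dt * dt / 2.0
    return acc / T

def golden_section_min(f, a, b, tol=1e-12):
    """Golden-section search for a quasi-convex f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0       # 1/phi ≈ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

def coordinate_descent(T, sig2, v0, v1, v2, steps=60):
    """Alternate 1-D minimizations of J(t1, t2) over t2 and t1."""
    t1, t2 = 0.0, T / 2.0                       # arbitrary initialization
    for _ in range(steps):
        t2 = golden_section_min(
            lambda s: mean_variance(T, sig2, v0, [(t1, v1), (s, v2)]), t1, T)
        t1 = golden_section_min(
            lambda s: mean_variance(T, sig2, v0, [(s, v1), (t2, v2)]), 0.0, t2)
    return t1, t2

t1, t2 = coordinate_descent(10.0, 1.0, 1.0, 1.0, 1.0)
print(0.0 < t1 < t2 < 10.0)   # long horizon → regime 3: both instants interior
```

Because the cost has a unique CWLM (Theorem 9), the iterates cannot be attracted to a spurious stationary point; a fixed iteration count is used above in place of a convergence test for simplicity.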
Fig. 10. Test performances of the coordinate descent. The fixed parameters are $\sigma^2 = 1$, $T = 10$; $v_0, v_1, v_2$ have been drawn uniformly with respect to the Lebesgue measure from the part of the cube which corresponds to regime 3. (a) The increments of $t_1$. (b) The increments of $t_2$. (c) The natural logarithms of the increments of $t_1$. (d) The natural logarithms of the increments of $t_2$. (e) The difference $I^{(1)}_{\sigma^2,T,v_0,v_1,v_2}(t_1) - I^{(2)}_{\sigma^2,v_0,v_1,v_2}(t_1)$: according to Theorem 7, the values of the functions $I^{(1)}$ and $I^{(2)}$ are estimations of the difference $t^{(2)}_{2,\mathrm{opt}} - t^{(2)}_{1,\mathrm{opt}}$, and they are equal only for the optimal value of $t_1$. (f) The decrements of the cost function $J$. The abscissa of every graph is the number of the step. The lines join the mean values of the corresponding quantities over all trials; the vertical error bars show the maxima and the minima.

Fig. 11. The relative gain achieved by the optimal schedule as a function of $v_1, v_2$, for three values of $v_0$: (a) a small value of $v_0$, (b) $v_0 = 2$, (c) $v_0 = 5$.
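The comparison underlying Fig. 11 can be reproduced in miniature. The sketch below is our own illustrative code (the regular schedule is taken to be $(T/3, 2T/3)$ and the parameter values are arbitrary); the optimal cost is approximated by a grid search over the two measurement instants.

```python
# Illustrative sketch: relative gain of the (grid-)optimal two-measure
# schedule over the regular schedule (T/3, 2T/3).

def mean_variance(T, sig2, v0, schedule):
    """Mean Kalman error variance over [0, T]; schedule = [(time, noise var)]."""
    P, t, acc = v0, 0.0, 0.0
    for ti, vi in schedule:
        dt = ti - t
        acc += P * dt + sig2 * dt * dt / 2.0
        P += sig2 * dt
        P = P * vi / (P + vi)
        t = ti
    dt = T - t
    acc += P * dt + sig2 * dt * dt / 2.0
    return acc / T

sig2, T, v0, v1, v2 = 1.0, 1.0, 5.0, 0.2, 2.0   # imprecise prior, unequal sensors
h = T / 300
J_reg = mean_variance(T, sig2, v0, [(T / 3, v1), (2 * T / 3, v2)])
J_opt = min(mean_variance(T, sig2, v0, [(i * h, v1), (j * h, v2)])
            for i in range(301) for j in range(i, 301))
gain = (J_reg - J_opt) / J_reg
print(gain > 0.0)   # → True: the optimal schedule strictly beats the regular one
```

With an imprecise prior ($v_0$ large compared to $v_1$), the optimal schedule moves the precise first measure close to $t = 0$, and the gain over the regular schedule is considerable, in line with the discussion of Section IV-H.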