An Intelligent Scheme for Uncertainty Management of Data Synopses Management in Pervasive Computing Applications
Kostas Kolomvatsos
Department of Informatics and Telecommunications, University of Thessaly
Papasiopoulou 2-4, 35131 Lamia
[email protected]
Abstract—Pervasive computing applications deal with the incorporation of intelligent components around end users to facilitate their activities. Such applications can be provided upon the vast infrastructures of the Internet of Things (IoT) and Edge Computing (EC). IoT devices collect ambient data and transfer them towards the EC and Cloud for further processing. EC nodes can become the hosts of distributed datasets where various processing activities take place. The future of EC involves numerous nodes interacting with IoT devices and with each other in a cooperative manner to realize the desired processing. A critical issue for concluding this cooperative approach is the exchange of data synopses, which keeps EC nodes informed about the data present in their peers. Such knowledge is useful for decision making related to the execution of processing activities. In this paper, we propose an uncertainty-driven model for the exchange of data synopses. We argue that EC nodes should delay the exchange of synopses, especially when no significant differences with historical values are present. Our mechanism adopts a Fuzzy Logic (FL) system to decide when there is a significant difference from the previously reported synopsis and, thus, when the new one should be exchanged. Our scheme is capable of alleviating the network from the numerous messages that would otherwise be circulated even for low fluctuations in synopses. We analytically describe our model and evaluate it through a large set of experiments. Our experimental evaluation targets the efficiency of the approach based on the elimination of unnecessary messages while keeping peer nodes immediately informed about significant statistical changes in the distributed datasets.
Index Terms—Edge Computing, Edge Mesh, Internet of Things, Data Management, Data Synopsis
I. INTRODUCTION
The Internet of Things (IoT) provides a huge infrastructure where numerous devices can interact with end users and their environment to collect data and perform simple processing activities [29]. IoT devices can report their data to the Edge Computing (EC) infrastructure and the Cloud for further processing. As we move upwards from the IoT to the EC and the Cloud, we meet increased computational resources, however, accompanied by increased latency. EC has been proposed as the paradigm adopted to be close to the IoT platform and end users, involving increased processing capabilities (compared to the IoT) to reduce the latency experienced when relying on the Cloud. EC deals with the provision of storage and processing capabilities by various heterogeneous devices [29]. EC nodes can become the hosts of distributed datasets formulated by the reports of IoT devices. There, we can incorporate advanced services to produce knowledge and analytics and immediately respond to any request, thus supporting real-time applications. The aforementioned distributed datasets become the subject of numerous requests having the form of processing tasks or queries. Various research efforts study the selection of data hosts based on their available memory and battery levels [1] to perform the execution of tasks/queries. The future of EC involves nodes that are capable of cooperating to perform the desired tasks/queries. Under this 'cooperative' perspective, having a view on the statistics of the available datasets may assist in the definition of efficient task/query allocations. For instance, an EC node may decide to offload a task/query for various performance reasons. The research community has already proposed data migration [9] as a solution to efficiently respond to requests. However, migrating huge volumes of data may jeopardize the stability of the network due to the increased bandwidth required to perform such an action.
A solution is the offloading of tasks/queries; however, the allocation decision should be based on the data present in every peer node. The decision making should be realized upon the statistics of the available datasets to conclude the most appropriate allocation. In this paper, we focus on the autonomous nature of EC nodes and propose a scheme for distributing data synopses to peers. We argue for the dissemination of the synopsis of each dataset to have an insight on the data present in peers. We propose the monitoring of synopses updates to detect when a significant deviation (i.e., the magnitude) from the previously reported synopsis is present. We define an uncertainty-driven model under the principles of Fuzzy Logic (FL) [30] to decide when an EC node should distribute the synopsis of its dataset. The uncertainty is related to the 'threshold' (upon the differences of the available data after getting reports from IoT devices) over which the node should disseminate the current synopsis. We monitor the 'statistical significance' of synopses updates before we decide to distribute them in the network. This way, we want to avoid the continuous distribution of synopses, especially when no significant difference is present. We consider the trade-off between the frequency of the distribution and the 'magnitude' of updates. We can accept the limited freshness of updates for gaining benefits in the performance of the network. Our FL-based decision making mechanism adopts Type-2 FL sets to cover the uncertainty not only in the decision making but also in the definition of the membership functions for every FL set. We apply our scheme upon past, historical observations (i.e., synopses updates) as well as upon future estimations. We adopt a forecasting methodology for estimating the 'trend' in synopses updates. Both the view on the past and the view on the future are fed into our Type-2 FL System (T2FLS) to retrieve the Potential of Distribution (PoD).
Two PoD values (upon historical values and future estimations) are smoothly aggregated through a geometric mean function [27] to finally decide the dissemination action. Our contributions are summarized by the following list:
• We provide a monitoring mechanism for detecting the magnitude of the updated synopses;
• We deliver a forecasting scheme for estimating the future realizations of data synopses;
• We describe and analyze an uncertainty-driven model for detecting the appropriate time to distribute data synopses to peer nodes;
• We report on the experimental evaluation of the proposed models through a large set of simulations.
The paper is organized as follows. Section II presents the related work while Section III formulates our problem and provides the main notations adopted in our model. In Section IV, we present the envisioned mechanism and explain its functionalities. In Section V, we deliver our experimental evaluation and conclude the paper in Section VI by presenting our future research directions.

II. RELATED WORK
A significant research subject in EC is resource management. It is critical to adopt efficient techniques for resource allocation, either in the form of scheduling or in the form of the allocation of tasks/queries to the appropriate resources. The ultimate goal is to increase the performance, facilitate the desired processing and timely provide responses. Currently, EC nodes adopt the following models to execute tasks/queries [37]: (i) an aggregation model, where data coming from multiple devices are collected and pre-processed in an edge node [38]; data are locally processed before they are transferred to the Cloud, limiting the time for the provision of the final response; (ii) a 'cooperative' model, where EC nodes can interact with IoT devices having processing capabilities to offload a subset of tasks [39]; devices and EC nodes should exhibit a low distance, otherwise, their interaction may be problematic; (iii) a 'centralized' approach, where edge nodes act as execution points for tasks/queries offloaded by IoT devices [31]; EC nodes exhibit higher computational capabilities than IoT devices and can undertake the responsibility of performing 'intensive' tasks, however, under the danger of being overloaded. Current research efforts related to task/query management at EC nodes focus on caching [10], context-aware web browsing [32] and video pre-processing [35]. A number of efforts try to deal with the resource management problem [6], [13], [34], [37]. Their aim is to address the challenges of offloading various tasks/queries and data to EC nodes taking into consideration a set of constraints, e.g., time requirements, communication needs, nodes' performance, the quality of the provided responses and so on. The current form of the IoT and EC involves numerous devices that collect, report and process data. Due to the huge amount of data, data synopses can be useful in a variety of IoT/EC applications.
Synopses depict the 'high-level' description of data and represent their statistics [4]. The term 'synopsis' usually refers to: (i) approximate query estimation [11]: we try to estimate the responses given the query; this should be performed in real time, and the processing aims to estimate the data that will better 'match' the incoming queries; (ii) approximate join estimation [5], [16]: we try to estimate the size of a join operation, which is significant in 'complex' operations over the available data; (iii) aggregates calculation [12], [15], [17], [23]: the aim is to provide aggregate statistics over the available data; (iv) data mining mechanisms [2], [3], [33]: some services may demand synopses instead of the individual data points, e.g., clustering, classification. In any case, the adoption of data synopses aims at the processing of only a subset of the actual data. Synopses act as 'representatives' of data and usually involve summarizations or the selection of a specific subset [22]. These limited representations reduce the need for increased bandwidth of the network and can be easily transferred in the minimum possible time. Some synopses definition techniques involve sampling [22], load shedding [7], [36], sketching [8], [28] and micro-cluster based summarization [2]. Sampling is the easiest one, targeting the probabilistic selection of a subset of the actual data. Load shedding aims to drop some data when the system identifies a high load, thus avoiding bottlenecks. Sketching involves the random projection of a subset of features that describe the data, incorporating mechanisms for the vertical sampling of the stream. Micro clustering targets the management of the multi-dimensional aspect of any data stream towards the processing of the data evolution over time. Other statistical techniques are histograms and wavelets [4].

III. PRELIMINARIES AND PROBLEM DESCRIPTION
We consider a set of N EC nodes N = {n_1, n_2, ..., n_N}, with their corresponding datasets D = {D_1, D_2, ..., D_N}. Every dataset D_i = {x_j}_{j=1}^{m_i} contains m_i real-valued contextual multidimensional data vectors x = [x_1, x_2, ..., x_d]^T ∈ R^d of d dimensions. Every dimension refers to a contextual attribute (e.g., temperature, humidity). Contextual data vectors are reported by IoT devices that capture them through interaction with their environment. Contextual data vectors become the basis for knowledge extraction for every n_i. An arbitrary methodology is adopted, like a regression analysis, classification tasks, the estimation of multivariate and/or univariate histograms per attribute, non-linear statistical dependencies between input attributes and an application-defined output attribute, clustering of the contextual vectors, etc. Without loss of generality, we assume the online knowledge extraction model as the statistical synopsis S. S is represented by l-dimensional vectors, i.e., s = [s_1, s_2, ..., s_l]^T ∈ R^l. A statistical synopsis S_i is the summarization of D_i located at n_i. We obtain N data synopses S_1, ..., S_N represented via their synopsis vectors. Given the synopses, EC nodes are, initially, responsible for maintaining their up-to-date synopses as the underlying data may change (e.g., concept drift). Additionally, EC nodes try to act in a cooperative manner and decide to exchange their synopses regularly. Through this approach, EC nodes can have a view on the statistical properties of the datasets present in their peers. Based on that, we can gain an advantage in performing decision making for allocating tasks/queries and deliver analytics taking into consideration the data synopses distributed in the EC network. Arguably, there is a trade-off between the communication overhead and the 'freshness' of synopses delivered to peer nodes.
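The paper leaves the concrete synopsis technique open (histograms, sketches, samples, etc.). As a minimal illustrative sketch, one could stack the per-attribute mean and standard deviation of the local dataset into the synopsis vector; the function name and the choice of statistics below are our own assumptions, not part of the model:

```python
import statistics

def synopsis(dataset):
    """Build an l-dimensional synopsis vector (here l = 2*d):
    the mean and population standard deviation of every attribute.
    Any other summarization could be substituted."""
    s = []
    for column in zip(*dataset):  # transpose: one tuple per attribute
        s.append(statistics.fmean(column))
        s.append(statistics.pstdev(column))
    return s

# Example: three vectors with d = 2 attributes (e.g., temperature, humidity)
D_i = [[21.0, 40.0], [22.0, 42.0], [23.0, 44.0]]
print(synopsis(D_i))  # means 22.0 and 42.0, plus the two deviations
```

Each node n_i would recompute this vector as new contextual vectors arrive, keeping S_i aligned with D_i.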
EC nodes can share up-to-date synopses every time a change in the underlying data is realized, at the expense of flooding the network with numerous messages. Recall that EC nodes are connected with IoT devices that continuously report data vectors at high rates. In this case, peer nodes enjoy fresh information, increasing the performance of decision making. The other scenario is to postpone the delivery of synopses, i.e., to reduce the sharing rate, expecting less network overhead in light of 'obsolete' synopses. In this paper, we go for the second scenario and try to detect the appropriate time to deliver a synopsis to peer nodes. The target is to optimally limit the messaging overhead. The idea is to let EC nodes decide on the 'magnitude' of the collected statistical synopsis before they decide a dissemination action. Obviously, there is uncertainty around the amount of magnitude that should be realized before we fire a dissemination action. We propose an uncertainty-driven mechanism, i.e., our T2FLS, that delivers the PoD upon past synopsis observations and its estimated values. In any case, EC nodes are forced to disseminate synopses at pre-defined intervals even if no delivery decision is the outcome of our model. We have to notice that, to avoid bottlenecks in the network, we consider the pre-defined intervals to differ among the group of EC nodes. This 'simulates' a load balancing approach, avoiding having too many EC nodes disseminating their synopses at the same time. Our T2FLS is fed with the most recent S as well as with its future realizations. Every EC node monitors significant changes in the local synopsis as more contextual data are received from IoT devices. Based on this local monitoring, implicitly, we incorporate into the network edge the necessary 'randomness' in the conclusion of the final decision, thus potentially avoiding network flooding.
The discussed 'randomness' is enhanced by the different data arriving at the available nodes and their autonomous decision making. Such 'randomness' can assist in limiting the possibility of deciding the delivery of synopses at the same time, thus, we can limit the possibility of overloading the network. Let us consider that at the discrete time instance t a new data vector arrives at n_i. Afterwards, the corresponding synopsis s_i should be updated to conclude the new s_i^t. Let e_t be the difference between the current, last sent synopsis s_i and the new, updated one, s_i^t. We call this error/difference the update quantum, i.e., the magnitude of the difference between s_i and s_i^t. n_i calculates e_t at consecutive time steps. e_t, in a simplistic way, can be concluded by adopting the sum of differences between two consecutive synopses for every dimension. Obviously, we can adopt any desired synopsis realization technique as mentioned above. e_t may be positive or negative, i.e., a new vector can increase or decrease the value of each dimension. For facilitating our calculations, we rely on the absolute value of any difference. EC nodes should delay the delivery of s_i^t until they see that a significant difference, i.e., a high magnitude of e_t, is present. At that time, it is necessary to have the peer nodes informed about the new status of the local dataset. We define the update epoch as the time between disseminating two consecutive synopsis updates. The update epoch is realized at pre-defined intervals, T, 2T, 3T, ... (T > 0). To describe our solution, we focus on an individual interval, e.g., [1, 2, ..., T]. At each t ∈ [1, 2, ..., T], EC nodes check the last e realizations and feed them into our T2FLS to see if they excuse the initiation of the dissemination process. This action should be realized till T. If no dissemination decision is made till T, EC nodes start the dissemination no matter the observed magnitude.
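The two quantities the nodes reason over, the update quantum e_t and its short-term projection, can be sketched as follows. This is a simplified illustration under our own assumptions: the quantum is taken as a plain sum of absolute per-dimension differences, and the forecast follows the double exponential smoothing recurrences detailed in Section IV with illustrative smoothing factors:

```python
def update_quantum(s_last, s_new):
    """e_t: sum of absolute per-dimension differences between the
    last disseminated synopsis and the freshly updated one."""
    return sum(abs(a - b) for a, b in zip(s_last, s_new))

def forecast_quanta(e, alpha=0.5, beta=0.5, horizon=3):
    """Double exponential smoothing over the series e_1..e_t:
      v_j = alpha*e_j + (1 - alpha)*(v_{j-1} + b_{j-1})
      b_j = beta*(v_j - v_{j-1}) + (1 - beta)*b_{j-1}
    with v_1 = e_1 and b_1 = e_2 - e_1; the k-step forecast is
    v_j + k*b_j. Returns the next `horizon` estimated quanta."""
    v, b = e[0], e[1] - e[0]
    for x in e[1:]:
        v_prev = v
        v = alpha * x + (1 - alpha) * (v + b)
        b = beta * (v - v_prev) + (1 - beta) * b
    return [v + k * b for k in range(1, horizon + 1)]

# A perfectly linear quanta series is extrapolated exactly:
print(forecast_quanta([1.0, 2.0, 3.0, 4.0]))  # [5.0, 6.0, 7.0]
```

The three forecast values play the role of e_{t+1}, e_{t+2}, e_{t+3} fed into the T2FLS, while the last three recorded quanta form the 'past' input.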
EC nodes also 'reason' over the time series of update quanta {e_t} with t = 1, 2, ..., T. EC nodes 'project' the time series to the future through the adoption of a forecasting technique. Again, the projection of update quanta is fed into the T2FLS to generate the PoD upon the future estimations of e. The final goal is to accumulate as much e as possible before we decide the dissemination action. When the accumulated magnitude is relatively high, EC nodes decide to 'stop' the monitoring process, disseminate the updated synopsis and 'start off' a new monitoring/update epoch.

IV. UNCERTAINTY DRIVEN PROACTIVE SYNOPSES DISSEMINATION
The Proposed Fuzzy Reasoning Process. For describing the proposed T2FLS, we borrow the notation of our previous efforts (in other domains) presented in [20], [21]. The T2FLS is adopted locally at every node at t, fusing the past e_t observations and the future e_t realizations. e_t is adopted as the indication of whether the current update quanta significantly deviate from their past and future short-term trends. The envisioned fusion of update quanta is achieved through a finite set of Fuzzy Inference Rules (FIRs). FIRs incorporate and 'combine' past quanta or future estimations (two different processes) to reflect the PoD. Actually, we 'fire' the T2FLS two consecutive times: for the last three (3) quanta realizations, i.e., e_{t-2}, e_{t-1}, e_t, and for the future three (3) quanta estimations, i.e., e_{t+1}, e_{t+2}, e_{t+3}. Our T2FLS defines the fuzzy knowledge base for every n_i, e.g., a set of FIRs like: 'when the past/future quanta exhibit a significant difference from the last synopsis delivery, the PoD for initiating the delivery of the new synopsis might also be high'. We rely on Type-2 FL sets as the 'typical' Type-1 sets and the FIRs defined upon them involve uncertainty due to partial knowledge in representing the output of the inference [26]. The limitation of a Type-1 FL system is in handling uncertainty in representing knowledge through FIRs [18], [26]. In such cases, uncertainty is observed not only in the environment, e.g., we classify the PoD as 'high', but also in the description of the term, e.g., 'high', itself. In a T2FLS, membership functions are themselves 'fuzzy', which leads to the definition of FIRs incorporating such uncertainty [26]. FIRs refer to a non-linear mapping between three inputs: (i) when focusing on the past quanta, we take into consideration the following as the inputs into the T2FLS: e_{t-2}, e_{t-1}, e_t; (ii) when focusing on the future quanta, we take into consideration the following as the inputs into the T2FLS: e_{t+1}, e_{t+2}, e_{t+3}. The outputs are PoD_p & PoD_f, respectively. The antecedent part of a FIR is a (fuzzy) conjunction of inputs and the consequent part of the FIR is the PoD indicating the belief that an event actually occurs. The proposed FIRs have the following structure:

IF e_{t-2} is A_1^k AND e_{t-1} is A_2^k AND e_t is A_3^k THEN PoD_p is B^k,
IF e_{t+1} is A_1^k AND e_{t+2} is A_2^k AND e_{t+3} is A_3^k THEN PoD_f is B^k,

where A_1^k, A_2^k, A_3^k and B^k are membership functions for the k-th FIR mapping e_i, e_j, e_k and PoD_m, with i ∈ {t-2, t+1}, j ∈ {t-1, t+2}, k ∈ {t, t+3} and m ∈ {p, f}. For the FL sets, we characterize their values through the terms: low, medium, and high. The structure of the FIRs in the proposed T2FLS involves linguistic terms, e.g., high, represented by two membership functions, i.e., the lower and the upper bounds [25]. For instance, the term 'high', whose membership for x is a number g(x), is represented by two membership functions defining the interval [g_L(x), g_U(x)]. This interval corresponds to a lower and an upper membership function g_L and g_U, respectively (e.g., the membership of a specific x may lie anywhere within such an interval). The interval areas [g_L(x_j), g_U(x_j)] for each x_j reflect the uncertainty in defining the term, e.g., 'high', useful to determine the exact membership function for each term. Obviously, if g_L(x) = g_U(x), ∀x, we obtain a FIR in a Type-1 FL system. The interested reader could refer to [25] for information on reasoning under Type-2 FIRs.

Time Series Forecasting. Exponential smoothing [19] is a time series estimation methodology for univariate data. The method can be easily extended to detect the trend or seasonal components of data. We have to notice that we adopt the specific methodology as it is fast, easily adapted to frequent changes of data and performs better than other techniques (e.g., the moving average), especially for a short-term time horizon. Exponential smoothing is similar to the weighted sum of past observations, however, it adopts a decay factor for decreasing the weights based on the index of the past observation. In our model, we adopt double exponential smoothing for estimating e_{t+1}, e_{t+2}, e_{t+3} based on all the available/calculated synopsis quanta. Recall that e_t exhibits the statistical difference (e.g., the sum) between the current and the previously disseminated synopsis. Hence, we can rely on the univariate scenario hiding all the statistics for each individual dimension. This is an 'abstraction' strategically adopted in our model. In the first place of our future research plans is the application of forecasting techniques for each individual dimension, then aggregating them to derive the final estimated update quanta. Let all the available quanta be e_1, e_2, ..., e_t. When applying the double exponential smoothing model, the following equations hold true: v_j = α e_j + (1 - α)(v_{j-1} + b_{j-1}), b_j = β(v_j - v_{j-1}) + (1 - β) b_{j-1}, where v_1 = e_1 & b_1 = e_2 - e_1. Additionally, α ∈ (0, 1) is the data smoothing factor and β ∈ (0, 1) is the trend smoothing factor. The method adopts two smoothing factors to control the decay of data and the decay of the influence of the change in the trend of data. For forecasting additional data in the future, we adopt the following equation: v_{j+k} = v_j + k b_j. Based on the above, we can easily get the e_{t+1}, e_{t+2}, e_{t+3} quanta fed into our T2FLS to retrieve the PoD_f.

The Decision Making Mechanism. Our T2FLS is responsible for delivering PoD_p and PoD_f based on the most recent past observations of e and the estimated future realizations. Hence, we have to combine the experience of an EC node for the update quanta as already recorded with its insight on the future. We propose a simple aggregation process for PoD_p and PoD_f (to be realized in real time) based on the geometric mean [27]. The following equation holds true: G(PoD_p, PoD_f) = (∏_{i ∈ {p,f}} PoD_i)^{1/2}. We rely on the geometric mean instead of other methodologies as it deals with all the inputs (i.e., our PoD values) and is not affected by extremely low or high values. Additionally, we want to incorporate into our decision making a 'strict' approach, i.e., when a PoD value is zero then the final outcome is zero as well. Through this approach, we try to be sure about the magnitude of the update quanta before we decide to initiate the dissemination action. Finally, when G > θ, we initiate the dissemination action. θ is a pre-defined threshold that 'dictates' when an EC node should pursue the exchange of the synopsis.

V. EXPERIMENTAL SETUP AND EVALUATION
Setup and Performance Metrics. We report on the performance of our Uncertainty Driven Dissemination Model (UDDM) and compare it with other baseline models and schemes proposed in the relevant literature. Initially, we focus on the percentage of T that our model spends till the final decision. The φ metric is defined as follows: φ = (1/E) Σ_{i=1}^{E} (t*/T), where t* is the time when the dissemination action is decided, E is the number of experiments and i depicts the index of every experiment. When φ → 1, the proposed model spends the entire interval T to conclude a final decision. When φ → 0, our model manages to conclude the dissemination action immediately. Additionally, we define the metric δ, i.e., δ = (1/E) Σ_{i=1}^{E} |s_{t*} - s|. δ represents the average magnitude of the difference between the current and the new synopses. Through the use of δ, we want to present the ability of the proposed model to 'react' even to limited changes in the updated synopses. The magnitude is calculated at t*. The ability of the proposed system to avoid the overloading of the network and limit the required number of messages is exposed by ψ. The following equation holds true: ψ = T / |t*|_{t* ∈ [1,T]} (ψ ∈ [1, T]), where |t*|_{t* ∈ [1,T]} represents the number of times that the model stops in the interval [1, T]. When ψ → 1, the proposed model stops frequently, thus, multiple messages conveying the calculated synopses are transferred through the network. When ψ → T, our model does not stop frequently, thus, the calculated synopses are delivered after the expiration of the window T. For our experimentation, we adopt the dataset presented by the Intel Berkeley Research Lab [14]. It contains measurements from 54 sensors deployed in a lab. We get the available measurements and simulate the provision of context vectors to calculate the synopses and the update quanta (they are realized in the interval [0, ∞)) in a sequential order.
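For concreteness, the φ and ψ metrics above can be computed as in the following sketch; the experiment outcomes are hypothetical placeholders and the function names are our own:

```python
def phi(stop_times, T):
    """phi = (1/E) * sum(t*/T): average fraction of the window T
    consumed before the dissemination decision
    (1 = whole interval used, 0 = immediate decision)."""
    return sum(stop_times) / (T * len(stop_times))

def psi(num_stops, T):
    """psi = T / |t*|: T divided by the number of dissemination
    decisions inside [1, T]; psi -> 1 means the model stops at
    (almost) every round, psi -> T means it stops only once."""
    return T / num_stops

# Hypothetical outcomes of E = 3 experiments with window T = 10:
print(phi([3, 5, 7], T=10))  # 0.5
print(psi(4, T=10))          # 2.5
```

Averaging these values over all E simulation runs yields the figures reported in the evaluation below.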
We also pursue a comparative assessment of the UDDM with: (i) a baseline model (BM) that disseminates synopses when any change is observed over the incoming data; (ii) the Prediction based Model (PM) [24]: the PM proceeds with the stopping decision only when the estimation of the future update quanta violates a threshold. We perform simulations for E experiments and T ∈ {10, 100, 1,000}. In every experiment, we run the UDDM and get numerical results related to the mean values of the aforementioned metrics (we adopt two values of θ for the UDDM and the PM).

Performance Assessment. In Fig. 1, we present our results for the φ metric. We observe that the adoption of a low θ (the threshold for deciding the dissemination action) and a low T (the deadline to conclude the distribution of synopses) lead to an increased time for the final decision. Even in that case, the required time is around 30% of the total deadline T. When θ and T are high, the percentage of T devoted to concluding the dissemination decision is very low. Actually, the proposed system manages to deal with the final decision as soon as it detects that update quanta are aggregated over time, even in small amounts. This can be realized in early monitoring rounds due to the dynamic nature of the incoming data. Recall that we adopt a time series that consists of sensory data retrieved by a high number of devices that are, generally, characterized by their dynamic nature.

Fig. 1. Comparative results for the φ metric.

In Table I, we present our results related to the δ metric, i.e., the update quanta at the time when the dissemination action is decided. We observe that the UDDM requires a higher magnitude than the BM and the PM before it concludes the dissemination action. This stands for both experimental scenarios, i.e., both adopted values of θ. In general, there is an increment in δ as T increases. Additionally, the PM exhibits the lowest δ outcome, i.e., the update quanta for which a dissemination action is decided.
These results present the 'attitude' of the proposed model to wait and aggregate update quanta in order to alleviate the network from an increased number of messages.

TABLE I
EXPERIMENTAL OUTCOMES FOR THE δ METRIC

        |      first θ        |      second θ
T       | UDDM   BM     PM    | UDDM   BM     PM
10      | 16.42  13.87  13.87 | 15.82  12.05  8.61
100     | 20.95  17.76  17.02 | 17.60  15.59  13.92
1,000   | 19.62  17.68  16.34 | 20.55  17.79  16.63
Table II depicts our experimental evaluation related to the ψ metric. We observe that the UDDM demands fewer dissemination messages compared with the BM & PM. As T → 1,000, the BM and PM exhibit an increased number of messages; approximately, they deliver the update quanta at every monitoring round. The proposed model decides the dissemination of messages every 2.5 (approximately) monitoring rounds for the first θ setting. For the second θ setting, we observe an increment in the dissemination activity, i.e., a dissemination every 1.5 monitoring rounds.

TABLE II
EXPERIMENTAL OUTCOMES FOR THE ψ METRIC

        |      first θ        |      second θ
T       | UDDM   BM     PM    | UDDM   BM     PM
10      | 2.5    1.42   1.66  | 1.66   1.66   1.42
100     | 2.5    1.2    1.13  | 1.47   1.29   1.23
1,000   | 2.09   1.2    1.12  | 1.46   1.24   1.15
VI. CONCLUSIONS