Noise Is Useful: Exploiting Data Diversity for Edge Intelligence
Zhi Zeng, Yuan Liu, Senior Member, IEEE, Weijun Tang, Member, IEEE, and Fangjiong Chen, Member, IEEE

Z. Zeng, Y. Liu, and W. Tang are with the School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, P. R. China (email: [email protected]; [email protected]; [email protected]). F. Chen is with the School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China, and also with the Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources, Guangzhou 510300, China (e-mail: [email protected]).
Abstract—Edge intelligence requires fast access to the distributed data samples generated by edge devices. The challenge is to use limited radio resources to acquire massive data samples for training machine learning models at the edge server. In this article, we propose a new communication-efficient edge intelligence scheme in which the most useful data samples are selected to train the model. Here the usefulness or value of data samples is measured by data diversity, which is defined as the difference between data samples. We derive a closed-form expression of data diversity that combines data informativeness and channel quality. Then a joint data-and-channel diversity aware multiuser scheduling algorithm is proposed. We find that noise is useful for enhancing data diversity under some conditions.
Index Terms—Data diversity, edge intelligence, machine learning, scheduling.
I. INTRODUCTION
With the explosive increase of mobile devices and ubiquitous intelligent applications, the massive data generated by edge devices materialize artificial intelligence (AI) or machine learning at the network edge, known as edge AI or edge intelligence. However, many mobile devices, like internet-of-things (IoT) nodes, typically have small hardware sizes and limited computational power. Thus the input data of edge devices are usually transmitted via wireless links to an external computing system (i.e., an edge server) for processing [1]–[3]. As chips become more and more powerful, the computational power of edge servers increases rapidly, and wireless communication becomes the bottleneck to fast access of the data distributed across edge devices. Moreover, the wireless transmission of high-dimensional training data from a large number of edge devices may congest the air interface due to limited radio resources. Overcoming this challenge urgently calls for efficient wireless data transmission solutions for edge intelligence [4]–[7].

On one hand, the goal of communication is data rate maximization, in which the channel is a "bit-pipe" and all data bits have equal value. In machine learning, however, some data are more valuable than others. Thereby, to achieve communication-efficient edge intelligence, the edge server should select the most valuable or useful training data samples under the limited radio resources. The idea of data selection comes from active learning [8], where the most informative data samples are selected to be labeled (because manual labeling is costly), so that a model can be accurately trained with fewer labeled data samples. Based on this, the importance of data is differentiated in the edge intelligence system of [9], where wrongly classified data and local models trained on larger datasets are regarded as more informative, and corresponding radio resource allocation schemes are designed. In [10], the data samples closer to the decision boundary of a support vector machine (SVM), i.e., the data with shorter distances to the decision boundary, are considered more informative.

On the other hand, noise is harmful in communication since it causes decoding errors and thus makes communication unreliable. However, reliability may not matter in machine learning. For example, when training neural networks, adding noise can help avoid overfitting or being trapped in local solutions, and improves training performance [11], [12].

In this article, we consider an edge intelligence system as shown in Fig. 1, consisting of one edge server and multiple edge devices. A certain machine learning model is trained at the edge server using the data transmitted from the edge devices. The aim is to enhance the accuracy and generalization of the model while using fewer radio resources. We propose a new data selection scheme that exploits data diversity: the edge server prefers to select the data samples that are most different from those that have already been trained on.
We derive an explicit expression of the proposed data diversity metric, which interweaves the received signal-to-noise ratio (SNR) from communication and the data distance from machine learning. Different from the prior works [9], [10], which rely on downloading the model to the devices to evaluate data samples, in our scheme the devices only need to know the mean value of the previously trained data samples; the scheme is thus model-free. We reveal that noise is useful under some conditions. Specifically, when the received SNR at the edge server is good, the added noise can enlarge data diversity and improve the performance of the trained model.

The remainder of this article is organized as follows. Section II describes the system model of edge intelligence. Section III presents the proposed scheme. Section IV provides experimental results, and Section V concludes this article.

II. SYSTEM MODEL OF EDGE INTELLIGENCE
We consider an edge intelligence system including an edge server and K edge devices, where the edge devices transmit their individual labeled data samples to the edge server for training a machine learning model. The data sample transmission from the edge devices to the server is scheduled by the edge server in a time-division manner.

Each device has a local dataset containing labeled training samples. Specifically, let $(x_k, c_k)$ denote a labeled data sample of device $k$, with $x_k$ representing the data sample and $c_k \in \{1, 2, \cdots, C\}$ its corresponding label. We consider a noisy data channel for high-rate data sample transmission and a label channel for the corresponding label transmission. The latter is assumed to be noiseless for simplicity. This is reasonable since a label has a much smaller size than a data sample, e.g., a label is a single integer while a data sample is a vector with up to millions of coefficients. As time-division transmission is adopted, each slot is used to transmit a data sample of a scheduled device. We also assume that the data channel follows block fading, i.e., the channel remains static within each slot but may vary from one slot to another. Due to the wireless fading and noise, the edge server receives biased training data samples sent from the edge devices. Therefore, if edge device $k$ is scheduled to transmit its data sample $x_k$ at an arbitrary slot, the received signal at the edge server can be expressed as

$y_k = \sqrt{P}\, h_k x_k + z_k$,  (1)

where $P$ is the transmit power, $h_k$ is the channel gain from device $k$ to the edge server, and $z_k$ is the additive white Gaussian noise (AWGN) vector with independent and identically distributed (i.i.d.) $\mathcal{CN}(0, \sigma^2)$ entries. Multiplying (1) by $h_k^*$ gives

$h_k^* y_k = \sqrt{P}\, h_k h_k^* x_k + h_k^* z_k = \sqrt{P}\, |h_k|^2 x_k + h_k^* z_k$.  (2)

Considering analog transmission and maximum-likelihood detection, the edge server decodes the received data sample as

$\hat{x}_k = \frac{1}{\sqrt{P}} \Re\!\left( \frac{h_k^* y_k}{|h_k|^2} \right)$,  (3)

where the real part of the received signal is extracted, since the training data samples are usually real-valued in machine learning. Thus, only the real part of the noise, with variance $\sigma^2/2$, affects the data sample, and the received SNR for device $k$ is

$\mathrm{SNR}_k = \frac{2P|h_k|^2}{\sigma^2}$.  (4)
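As a concrete illustration of (1)–(4), the following minimal Python sketch (our own, not code from the paper; all names and parameter values are assumptions) simulates one slot of analog transmission over a Rayleigh-fading channel and decodes the real-valued sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit_sample(x, P=1.0, sigma2=0.1, rng=rng):
    """Simulate one slot of analog transmission, following Eqs. (1)-(4).

    x      : real-valued data sample (1-D numpy array)
    P      : transmit power
    sigma2 : per-entry variance of the complex AWGN z_k
    Returns the decoded sample x_hat and the received SNR.
    """
    d = x.size
    # Rayleigh-fading channel gain h_k ~ CN(0, 1)
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    # Complex AWGN vector z_k with i.i.d. CN(0, sigma2) entries
    z = np.sqrt(sigma2 / 2) * (rng.standard_normal(d) + 1j * rng.standard_normal(d))
    # Eq. (1): received signal
    y = np.sqrt(P) * h * x + z
    # Eq. (3): matched-filter decoding, keeping only the real part
    x_hat = np.real(np.conj(h) * y / np.abs(h) ** 2) / np.sqrt(P)
    # Eq. (4): received SNR (the real-part noise has variance sigma2/2)
    snr = 2 * P * np.abs(h) ** 2 / sigma2
    return x_hat, snr
```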
III. DATA-DIVERSITY AWARE MULTIUSER SCHEDULING

In this section, we derive a joint data-and-channel diversity policy for the edge intelligence system. Since the radio resources are limited, to enable fast learning it is crucial that the edge server schedules the most useful data samples, so as to ensure high accuracy and fast convergence of model training. The policy design lies at the intersection of wireless communication and machine learning, and it needs to take both factors into account.
Fig. 1. System model of data selection in edge intelligence.
A. Diversity Metric
The principle of active learning provides a relation between data diversity and model convergence: if highly disparate data are selectively added to the training set, better performance can be achieved with fewer data samples. Take image classification in machine learning as an example: every pixel of an image belongs to the same attribute, since each is represented by a gray value (e.g., an integer in the range 0∼255). Thus we can calculate the Euclidean distance between two data samples, and the value of this distance represents the difference between the two samples. A larger difference between two data samples means less information redundancy between them, and thus more informativeness for training machine learning models.

Euclidean distance is a measure of the distance between two points in Euclidean space, with a larger distance indicating more variation between the two points. In this paper, we adopt the most popular Euclidean distance as an example to exhibit our data-diversity-aware scheduling scheme. Other measures, such as cosine similarity, Chebyshev distance, KL divergence and so on, can also be used to measure data distances depending on the specific learning task, and our scheme is applicable to any measure of data distance.

Based on the above argument, the data diversity is measured by the Euclidean distance between data samples. Given any data samples $x_1$ and $x_2$, their distance can be readily computed as $\|x_1 - x_2\|$. The distance-based data-diversity measure is then defined as

$d(x_1, x_2) = \|x_1 - x_2\|^2$.  (5)

However, this measure of data diversity in active learning is for noiseless data and cannot be used directly in edge intelligence, where the received data samples at the edge server are corrupted by wireless fading and noise.

Therefore, the idea of our scheme is as follows. When transmitting a particular data sample, the transmitter (device) does not know the specific noise experienced by the transmitted sample. Since the data selection metric is computed at the transmitter side, we use the statistical properties of the noise to predict how a data sample to be scheduled will be affected by it. To this end, we take the expectation over the noise of the received data sample, since only the noise is random and uncertain in the received signal (1). Note that in our experiments the noise is randomly added to the data samples.

Fig. 2. An example of the effect of noise.

Specifically, denote $\bar{x}$ as the central point (or the mean) of the training data samples received at the edge server in previous slots. Then the distance from any decoded data sample $\hat{x}_k$ to $\bar{x}$ is given by

$d(\hat{x}_k, \bar{x}) = \|\hat{x}_k - \bar{x}\|^2 = \left\| \frac{\Re\{h_k^* y_k\}}{\sqrt{P}|h_k|^2} - \bar{x} \right\|^2 = \left\| \left( x_k + \frac{\Re\{h_k^* z_k\}}{\sqrt{P}|h_k|^2} \right) - \bar{x} \right\|^2$,  (6)

where $\hat{x}_k$ is the decoded data sample defined in (3).
By taking the expectation of (6) over the random noise $z_k$, we have the closed-form expression of the diversity (Joint Data-and-Channel Diversity Measure):

$\mathbb{E}_{z_k}\!\left[ d(\hat{x}_k, \bar{x}) \right] = d(x_k, \bar{x}) + \frac{1}{\mathrm{SNR}_k}$.  (7)

This follows because the noise term in (6) has zero mean, so the cross term vanishes in expectation and only the noise variance, $1/\mathrm{SNR}_k$ by (4), remains. Equation (7) shows that channel fading and noise affect the data diversity; both are reflected together in the SNR term and can increase the diversity. The effect of noise on the data diversity is further illustrated in Fig. 2, where an SVM classifier is adopted as an example. It can be observed that the noise adds randomness to the data samples and enlarges the data diversity, which benefits the generalization ability of the trained model. But if the noise is too strong, the transmitted data sample may land far from its noiseless position and become a misclassified sample. Our result is also of great practical significance: enlarging data diversity is one of the promising methods to improve the performance of a learning model. For example, in computer vision, the original training samples are often manually modified via rotation, flipping, and many other transformations to enlarge the training dataset. In this article, we instead exploit the noise inherent in wireless reception to enlarge the dataset and its diversity.
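A quick Monte Carlo check of (7) can be done for a scalar sample, where per-entry and per-sample noise conventions coincide (for a length-$D$ sample with the i.i.d. per-entry noise model above, the additive term scales with the dimension). This is our own illustration with hypothetical names, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

P, sigma2 = 1.0, 0.5
x_k, x_bar = 3.0, 1.0        # scalar data sample and central point
h = 0.8 + 0.6j               # fixed channel gain for this check
snr = 2 * P * abs(h) ** 2 / sigma2   # Eq. (4)

trials = 200_000
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
y = np.sqrt(P) * h * x_k + z                                  # Eq. (1)
x_hat = np.real(np.conj(h) * y / abs(h) ** 2) / np.sqrt(P)    # Eq. (3)

empirical = np.mean((x_hat - x_bar) ** 2)      # sample mean of d(x_hat, x_bar)
predicted = (x_k - x_bar) ** 2 + 1 / snr       # right-hand side of Eq. (7)
print(empirical, predicted)                    # the two values agree closely
```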
Algorithm 1 Data-Diversity Aware Multiuser Scheduling

initialize: the received data sample set $\mathcal{D}$ and the central point $\bar{x}$.
repeat
    (Central Point Broadcasting): the server broadcasts $\bar{x}$ to all devices;
    (Diversity Measure): each device $k$ calculates and then uploads the measure $I_k$ to the server;
    (Transmission Scheduling): the server selects the device $k^*$ with the maximum $I_k$ to transmit its data sample $x_{k^*}^*$;
    (Updating): $\mathcal{D} \leftarrow \mathcal{D} \cup \{\hat{x}_{k^*}^*\}$; $\bar{x} \leftarrow \frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} x$; train a new model using dataset $\mathcal{D}$;
until the model converges or the transmission budget is exhausted.
B. Multiuser Scheduling
The diversity expression (7) combines both communication and machine learning to reveal the usefulness of a data sample for machine learning. Thus, at each transmission, each device $k$ prefers to select the one data sample from its local dataset that maximizes the diversity in (7); the best data sample of device $k$ is denoted $x_k^*$. Then each device $k$ uploads its diversity measure $I_k$ to the edge server:

$I_k = \frac{1}{\mathrm{SNR}_k} + d(x_k^*, \bar{x})$.  (8)

After receiving the measures $I_k$ from all the devices, the edge server applies the following scheduling policy (Joint Data-and-Channel Diversity Scheduling): at each transmission time, the edge server schedules device $k^*$ to upload a data sample, where

$k^* = \arg\max_k \left\{ \frac{1}{\mathrm{SNR}_k} + d(x_k^*, \bar{x}) \right\}$.  (9)

Finally, we formally describe the whole scheme in Algorithm 1. At the first step, the server broadcasts the central point $\bar{x}$ of the set $\mathcal{D}$ of trained data samples. Then each device $k$ calculates its diversity measure $I_k$ by selecting the best data sample from its local dataset, according to (8). The server schedules the device $k^*$ with the largest $I_k$ and updates the central point $\bar{x}$. The above steps are iterated until the model converges or the transmission budget is exhausted (see the sketch below).
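The scheduling loop of Algorithm 1 can be sketched in a few lines of Python. This is our minimal interpretation under stated assumptions: function and variable names are ours, the per-device SNRs are held fixed across slots for simplicity (under block fading they would be re-estimated each slot), and the model-training step is left as a stub:

```python
import numpy as np

def best_local_sample(X_k, x_bar):
    """Pick the local sample farthest from the central point x_bar."""
    dists = np.sum((X_k - x_bar) ** 2, axis=1)   # d(x, x_bar) for each row
    idx = int(np.argmax(dists))
    return idx, dists[idx]

def schedule(local_datasets, snrs, x_bar):
    """One round of the joint data-and-channel diversity policy."""
    measures, picks = [], []
    for X_k, snr_k in zip(local_datasets, snrs):
        idx, d_best = best_local_sample(X_k, x_bar)
        measures.append(1.0 / snr_k + d_best)    # Eq. (8)
        picks.append(idx)
    k_star = int(np.argmax(measures))            # Eq. (9)
    return k_star, picks[k_star]

def run(local_datasets, snrs, x_init, budget):
    """Algorithm 1: iterate until the transmission budget is exhausted."""
    D = [np.array(x_init, dtype=float)]
    x_bar = np.array(x_init, dtype=float)
    for _ in range(budget):
        k_star, idx = schedule(local_datasets, snrs, x_bar)
        x_hat = local_datasets[k_star][idx]      # noisy reception omitted here
        D.append(x_hat)
        x_bar = np.mean(D, axis=0)               # update the central point
        # ... retrain the model on D here (e.g., refit an SVM) ...
    return D
```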
IV. EXPERIMENTAL RESULTS

In this section, we evaluate the proposed scheme via experiments.
A. Experimental Settings
We adopt SVM as an example of the machine learning model for the purpose of illustration; note that the proposed scheme is applicable to any machine learning algorithm. To reduce the consumption of wireless communication resources, a binary soft-margin SVM model is used. Initially, we build an original classifier model with some initial data samples that are already on the server before collecting the data samples distributed across the edge devices. As more and more training data samples are uploaded to the edge server by the edge devices, the classifier model is gradually corrected, and its ability to classify correctly continues to improve.

The number of edge devices is fixed unless specified otherwise, and the wireless channels $h_k$ are assumed to follow Rayleigh fading. We use the well-known MNIST dataset of handwritten digits to train the SVM classifier. It consists of two parts: a training set containing 60,000 samples and a test set containing 10,000 samples, each comprising data and labels. Each data sample in MNIST is a gray image of 28 × 28 pixels, i.e., a vector of dimension 784, stored as one row of the data matrix. The images depict the handwritten digits 0∼9, and these categories correspond to the columns of the label matrix; in each row, only the column of the true category is marked as 1, and the others are marked as 0. To highlight the results of the experiment, we select two categories from the entire dataset for binary classification, together with their corresponding training and test samples. The initial model is constructed from a small number of samples stored at the edge server. The remaining training data are randomly and uniformly distributed over the edge devices to build the local datasets.

For comparison, we also investigate three benchmarks (sketched below). The first benchmark considers only data diversity and ignores the communication factor, i.e., the device scheduling policy is $k^* = \arg\max_k d(x_k^*, \bar{x})$. The second benchmark is based on multiuser diversity, where each transmission schedules the device with the maximum channel gain, i.e., $k^* = \arg\max_k |h_k|^2$. The last one is random scheduling, i.e., at each slot, the server randomly schedules a device, which randomly selects one data sample.
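For reference, the three benchmark policies admit one-line implementations. The sketch below is our own and reuses the hypothetical conventions of the earlier snippets:

```python
import numpy as np

def schedule_data_only(local_datasets, x_bar):
    """Benchmark 1: k* = argmax_k d(x_k*, x_bar), ignoring channels."""
    best = [np.max(np.sum((X - x_bar) ** 2, axis=1)) for X in local_datasets]
    return int(np.argmax(best))

def schedule_channel_only(h):
    """Benchmark 2 (multiuser diversity): k* = argmax_k |h_k|^2."""
    return int(np.argmax(np.abs(h) ** 2))

def schedule_random(num_devices, rng=np.random.default_rng()):
    """Benchmark 3: a random device then sends a random sample."""
    return int(rng.integers(num_devices))
```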
B. Learning Performance

1) Convergence Rate: In Fig. 3, we investigate the learning performance of all schemes, where the transmit SNR $P/\sigma^2$ is set to the same value for all devices and the total transmission budget is fixed. We can see that the proposed scheme is significantly better than the other three benchmarks. Though the three benchmark schemes are able to converge, they require much more transmission resources than the proposed scheme; e.g., in this example, the first two benchmarks consume several times the resources of the proposed scheme to reach the same target test accuracy. In contrast, the random scheduling scheme cannot reach that test accuracy at all within the budget. This confirms that the proposed scheme, by exploiting both data diversity and channel quality, results in rapid convergence of the model, and validates the effectiveness of the proposed solution for fast learning.
2) Transmit SNR:
Fig. 3. Test accuracy versus transmission budget.
Fig. 4. Test accuracy versus the average transmit SNR.
Fig. 5. Test accuracy versus the number of users.
Fig. 6. Test accuracy versus the transmission budget.

To check the robustness to channel conditions, all schemes are tested at different transmit SNRs, and the results are shown in Fig. 4, where the transmission budget is fixed. It can be observed that the test accuracy of the proposed scheme is lower than that of the three benchmark schemes at low SNR. However, as the transmit SNR increases, its test accuracy gradually increases and eventually exceeds the other schemes. This is consistent with our above analysis: large noise (or low SNR) makes the received data samples deviate widely from the intended data samples, so that the model is trained on wrong data samples, whereas small noise allows the server to obtain data samples with larger diversity and thus accelerates model convergence. Moreover, as the transmit SNR improves, the test accuracy of the data-diversity-only benchmark exceeds the channel-aware scheduling scheme, eventually approaching the proposed scheme. Finally, the random scheduling scheme has the worst performance regardless of the SNR regime.
3) Multiuser Diversity:
We also analyze the user/channel diversity gain by plotting the test accuracy for different numbers of devices, as shown in Fig. 5, where the transmission budget and the transmit SNR are fixed. The proposed scheme outperforms the three benchmarks. When the number of users is small, there are fewer training data samples at the edge devices for the server to select from. As more devices enter the system, both the data and channel diversities available to the system increase. Note that the random scheduling scheme cannot exploit the data and channel diversities, since both the device and the data sample are selected at random.
4) Non-IID Data Distribution:
Data imbalance, or the non-IID data distribution problem, is usually encountered in machine learning. To verify the performance of the proposed data-diversity aware scheme in the non-IID case, we conduct experiments whose results are shown in Fig. 6. We can observe that the performance in the non-IID case is only slightly worse than in the IID case, which shows the robustness of our proposed data-diversity aware scheme. The reason is that the proposed scheme aims at selecting the most different data sample (compared with the average point of the trained data samples at the server) rather than a certain class of data in each iteration, so the uneven data distribution over devices does not fundamentally affect the selection. Therefore, our proposed scheme is robust under non-IID data distributions.

V. CONCLUDING REMARKS
In this article, we proposed a new scheduling scheme that exploits data diversity in addition to communication reliability. The proposed scheme selects the most useful data samples, as measured by data diversity, for model training so as to accelerate the training process. The proposed scheme can be extended to more sophisticated scenarios of wireless communication.

REFERENCES
[1] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.
[2] M. Liu and Y. Liu, "Price-based distributed offloading for mobile-edge computing with computation capacity constraints," IEEE Wireless Communications Letters, vol. 7, no. 3, pp. 420–423, 2018.
[3] Z. Liang, Y. Liu, T. Lok, and K. Huang, "Multiuser computation offloading and downloading for edge computing with virtualization," IEEE Transactions on Wireless Communications, vol. 18, no. 9, pp. 4298–4311, 2019.
[4] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, "Wireless network intelligence at the edge," Proceedings of the IEEE, vol. 107, no. 11, pp. 2204–2239, 2019.
[5] J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, and T. Melodia, "Machine learning for wireless communications in the internet of things: A comprehensive survey," CoRR, vol. abs/1901.07947, 2019. [Online]. Available: http://arxiv.org/abs/1901.07947
[6] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, "Artificial neural networks-based machine learning for wireless networks: A tutorial," IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3039–3071, 2019.
[7] Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, "Edge intelligence: Paving the last mile of artificial intelligence with edge computing," Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.
[8] B. Settles, "Active learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 6, no. 1, pp. 1–114, 2012. [Online]. Available: https://doi.org/10.2200/S00429ED1V01Y201207AIM018
[9] Y. Liu, Z. Zeng, W. Tang, and F. Chen, "Data-importance aware radio resource allocation: Wireless communication helps machine learning," IEEE Communications Letters, vol. 24, no. 9, pp. 1981–1985, 2020.
[10] D. Liu, G. Zhu, J. Zhang, and K. Huang, "Data-importance aware user scheduling for communication-efficient edge machine learning," IEEE Transactions on Cognitive Communications and Networking, 2020.
[11] Y. Jiang, R. M. Zur, L. L. Pesce, and K. Drukker, "A study of the effect of noise injection on the training of artificial neural networks," 2009, pp. 1428–1432.
[12] A. Neelakantan, L. Vilnis, Q. V. Le, I. Sutskever, L. Kaiser, K. Kurach, and J. Martens, "Adding gradient noise improves learning for very deep networks," arXiv preprint arXiv:1511.06807, 2015.