Moving Object Classification with a Sub-6 GHz Massive MIMO Array using Real Data
B. R. Manoj†, Guoda Tian∗, Sara Gunnarsson∗, Fredrik Tufvesson∗, Erik G. Larsson†
† Department of Electrical Engineering (ISY), Linköping University, Linköping, Sweden
∗ Department of Electrical and Information Technology, Lund University, Lund, Sweden
Emails: {manoj.banugondi.rajashekara, erik.g.larsson}@liu.se, {guoda.tian, sara.gunnarsson, fredrik.tufvesson}@eit.lth.se

ABSTRACT
Classification between different activities in an indoor environment using wireless signals is an emerging technology for various applications, including intrusion detection, patient care, and smart homes. Researchers have demonstrated different methods to classify activities, and their potential benefits, by utilizing WiFi signals. In this paper, we analyze the classification of moving objects by employing machine learning on real data from a massive multiple-input-multiple-output (MIMO) system in an indoor environment. We conduct measurements for different activities in both line-of-sight and non-line-of-sight scenarios with a massive MIMO testbed operating at 3.7 GHz. We propose algorithms to exploit amplitude- and phase-based features for the classification task. For the considered setup, we benchmark the classification performance and show that we can achieve up to 98% accuracy using real massive MIMO data, even with a small number of experiments. Furthermore, we demonstrate the gain in performance with a massive MIMO system as compared with that of a limited number of antennas, such as in WiFi devices.
Index Terms — Activity sensing, massive MIMO, machine learning, moving object classification.
1. INTRODUCTION
Wireless-based activity-sensing technology is evolving rapidly due to the interest in many day-to-day applications, such as intrusion detection and patient care [1], without requiring cameras, motion sensors, or radars [2]. The potential of utilizing wireless signals for activity sensing has been shown for a variety of applications, such as non-line-of-sight (NLOS)/line-of-sight (LOS) identification [3, 4], motion detection [5], human presence detection [6, 7], classification of human activities [2, 8, 9], and localization [10]. These can be realized indoors by measuring channel state information (CSI) from already-available WiFi devices [5]-[9]. Machine learning (ML)-based approaches have been successful and widely used in numerous applications due to their ability to learn statistical patterns from the channel information. A few example applications are human activity recognition using a hidden Markov model [2], NLOS/LOS identification for multiple-input-multiple-output (MIMO) systems [3, 4], massive MIMO-based indoor positioning [11, 12], and radio identification [13]. Although ML approaches have shown success in WiFi-based activity sensing as well, the performance is limited since the devices typically are equipped with only two or three antennas. This limitation makes it difficult to exploit statistical patterns in the spatial domain and degrades accuracy, especially in NLOS conditions.

In this paper, we aim to overcome these limitations by using real data from a massive MIMO system, exploiting the advantages of the spatial domain to efficiently classify moving objects in LOS and NLOS scenarios with ML models. To the best of our knowledge, this has not yet been addressed in the literature. To exploit the multiple antennas, we utilize the information from correlation changes across space as the feature for activity detection. The main contributions of this paper are: (i) We propose algorithms to extract features from the amplitude and phase information of the measured massive MIMO data. The measurements include LOS and NLOS propagation scenarios for both static and dynamic environments. In the literature, often only the amplitude information is considered; a few methods have been proposed to also exploit phase information, such as linear transformation and the phase difference between antennas [3]. Different from existing techniques, we consider relative phase changes by applying linear regression to the phase trajectories to obtain the error variance, and then using its correlation changes across space as the phase-based feature. (ii) We compare two different ML methods for classifying different moving objects, namely a support vector machine (SVM) and a feedforward neural network (NN) with fully connected layers.

This work was supported in part by ELLIIT, Security-Link, and a grant from Ericsson AB.
2. MOVING OBJECT CLASSIFICATION
We consider an uplink narrowband massive MIMO orthogonal frequency-division multiplexing (OFDM) system with multiple user equipments (UEs). The received signal matrix Y_f ∈ C^{M×N}, for all antennas M and snapshots N, can for each subcarrier f be described as

Y_f = H_f ⊙ Γ_f + N_f,    (1)

where f ∈ [1, F] is the subcarrier index, m ∈ [1, M] the radio-frequency (RF)-chain index, n ∈ [1, N] the snapshot index, ⊙ the Hadamard product, and H_f ∈ C^{M×N} the complex-valued channel matrix. It is difficult to model H_f precisely due to the unknown positions of the UEs, the unknown environment, and the unknown Doppler shifts caused by the unpredictable speeds and directions of different moving objects. Furthermore, we denote the frequency response of the RF chains as Γ_f ∈ C^{M×N}, with each element defined as Γ_f(m, n) = d_m e^{j(α_m − n ε_{m,f})}, where d_m, α_m, and ε_{m,f} represent the amplitude scaling, the initial phase offset, and the carrier frequency offset (CFO) of the m-th RF chain for the f-th subcarrier, respectively. The noise for the f-th subcarrier is denoted N_f ∈ C^{M×N}. Finally, for a total of F subcarriers, we define Y ∈ C^{F×M×N} to represent the received data.

Because of these limitations of the signal model, it is challenging to apply traditional detection and estimation theory to the considered classification task. We are thus motivated to exploit statistical features that ML models can use efficiently to classify the moving objects accurately. For many wireless-based applications, ML models can operate on raw I/Q samples; however, this requires a huge data set for training, which in turn increases hardware requirements and computational complexity. Instead of using the raw I/Q samples as input, we extract features from the real data set to reduce the dimensionality, which significantly reduces the required training data. In this section, we propose algorithms to extract these features from the amplitude and phase information.
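As a concreteness check, the per-subcarrier model in (1) can be simulated in a few lines of NumPy. The array sizes, parameter ranges, and noise level below are illustrative assumptions, not values from the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 100  # antennas / RF chains and snapshots (toy sizes)

# RF-chain response Gamma_f(m, n) = d_m * exp(j * (alpha_m - n * eps_{m,f}))
d = rng.uniform(0.5, 1.5, M)             # amplitude scaling d_m
alpha = rng.uniform(-np.pi, np.pi, M)    # initial phase offset alpha_m
eps = rng.uniform(-0.05, 0.05, M)        # CFO eps_{m,f} for one subcarrier f
n_idx = np.arange(1, N + 1)
Gamma_f = d[:, None] * np.exp(1j * (alpha[:, None] - n_idx[None, :] * eps[:, None]))

# Unknown channel H_f (Rayleigh placeholder) and additive noise N_f
H_f = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
N_f = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Eq. (1): element-wise (Hadamard) product
Y_f = H_f * Gamma_f + N_f
```

Note how the CFO term makes the phase of Γ_f drift linearly with the snapshot index n; this is exactly the behavior that the phase-based features in this section exploit.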
As a first preprocessing step, we apply linear interpolation to the data, i.e., we resample it onto evenly-spaced snapshots, to overcome potential sampling jitter during the measurements [6].
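As a small illustration of this step (on a synthetic jittered trace, not the measured data), linear interpolation onto a uniform grid can be done with `np.interp`:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
# Nominal 10 ms sampling with a little timing jitter (toy values)
t_jittered = np.sort(np.arange(N) * 0.01 + rng.uniform(0, 0.002, N))
x = np.sin(2 * np.pi * 5 * t_jittered)  # toy amplitude trace

# Evenly spaced snapshots over the same interval
t_uniform = np.linspace(t_jittered[0], t_jittered[-1], N)
x_uniform = np.interp(t_uniform, t_jittered, x)
```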
After the interpolation step, to eliminate random noise present in the data, we use discrete wavelet-based denoising [6, 8]. Similar to [6], we apply a 2-level wavelet transform to the amplitude of the data of all subcarriers across the snapshots for each antenna. We then apply Stein's unbiased risk estimate (SURE) thresholding to the high-frequency coefficients to filter out the noisy part [6]. We denote the data after noise filtering as |Z| ∈ R^{F×M×N}. In the literature, researchers have utilized various statistical features in the time and frequency domains for activity detection [7, 8]. Different from previous works, we exploit the correlation changes that occur during object movement not only across time and frequency, but also across space, and we use principal component analysis (PCA) to track these changes. For this, we choose a short time window T_w, capturing the small movement changes of the object in the area of interest, such that we can accurately estimate the time-varying correlation across frequency and space within T_w. Algorithm 1 presents our proposed method for amplitude-based feature extraction. In the algorithm, we discard the first eigenvalue, since its value has been observed to be high for both static and dynamic events, which makes them difficult to differentiate.
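The denoising step above can be sketched as follows. For self-containment, the sketch uses a Haar wavelet and the universal threshold rather than the wavelet family and SURE threshold used in the paper, so it is a simplified stand-in for the actual processing chain.

```python
import numpy as np

def haar_dwt(x):
    """One Haar analysis step: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar step."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=2):
    """2-level Haar denoising with soft universal thresholding of details."""
    details = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))        # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)                    # toy amplitude trace
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = denoise(noisy)
```

On this toy trace, thresholding the detail coefficients removes most of the high-frequency noise while leaving the slowly varying amplitude intact.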
Algorithm 1: Amplitude-based feature extraction
Input: |Z| ∈ R^{F×M×N} and T_w. Output: amplitude feature A.
for n = 1 to N do
  Define B_n ∈ R^{F×M} as the matrix obtained from |Z| at time n: B_n = [b_n(1), ..., b_n(F)]^T, with b_n(f) ∈ R^M, f ∈ [1, F].
end
Define D ∈ R^{FM×N} as the matrix obtained by vectorizing the matrices B_n, n ∈ [1, N]: D = [vec(B_1), vec(B_2), ..., vec(B_N)].
for j = 1 to N/T_w do
  E = [D(1), D(2), ..., D(T_w)], with E ∈ R^{FM×T_w} and columns D(i) ∈ R^{FM×1}, i ∈ [1, T_w].
  Determine the inner product S = E^T E, S ∈ R^{T_w×T_w}.
  Perform the eigenvalue decomposition S = U Σ U^T.
  g_j = the eigenvalues sorted in descending order, with the first eigenvalue discarded and the rest stored.
  Slide the window T_w along D and repeat the calculations of S and Σ.
end
G = [g_1, g_2, ..., g_{N/T_w}]^T.
A = E[G], where E[·] is the expectation operator.
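Under the assumption that NumPy is available, Algorithm 1 can be sketched as below. The synthetic "static" and "dynamic" tensors are illustrative stand-ins for |Z|, and the window length and number of kept eigenvalues are arbitrary choices.

```python
import numpy as np

def amplitude_features(Z, Tw, keep=6):
    """Algorithm 1 sketch: Z = |Z| with shape (F, M, N).
    For each window of Tw snapshots, form E (columns vec(B_n)),
    take the eigenvalues of S = E^T E, discard the largest,
    and average the kept eigenvalues over all windows."""
    F, M, N = Z.shape
    D = Z.reshape(F * M, N)                      # column n is vec(B_n)
    G = []
    for j in range(N // Tw):
        E = D[:, j * Tw:(j + 1) * Tw]            # FM x Tw window
        S = E.T @ E                              # Tw x Tw inner products
        eig = np.sort(np.linalg.eigvalsh(S))[::-1]
        G.append(eig[1:1 + keep])                # drop the first eigenvalue
    return np.mean(G, axis=0)                    # A = E[G]

# Toy data: a static tensor vs. one with a slow amplitude modulation
rng = np.random.default_rng(3)
F, M, N, Tw = 4, 8, 40, 10
Z_static = 1 + 0.01 * rng.standard_normal((F, M, N))
mod = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(N))
Z_dynamic = (1 + mod[None, None, :] * rng.standard_normal((F, M, 1))
             + 0.01 * rng.standard_normal((F, M, N)))
A_static = amplitude_features(Z_static, Tw)
A_dynamic = amplitude_features(Z_dynamic, Tw)
```

On this toy example the dynamic tensor yields clearly larger secondary eigenvalues than the static one, which is the kind of separation the classifier feeds on.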
Fig. 1. Unwrapped phase and linear regression for f = 30 and m = 1 (waving balloon, human dancing, spinning and moving wheel, spinning bike wheel, and static environment).

The phase information is more sensitive to the different types of moving objects and thus plays an important role in improving the classification accuracy. For a static scenario, only minor phase changes are expected, caused by measurement noise. For dynamic events, the phase of H_f changes more rapidly across the snapshots due to Doppler shifts. Furthermore, the phase of Γ_f(m, n) increases or decreases linearly with the snapshot index because of the CFO across the subcarriers. This phase component contributes to the raw unwrapped phase of Y, which varies around a linear regression line. We therefore first perform linear regression on the unwrapped phase of Y for each f and m across all snapshots. The unwrapped phase and the linear regression versus observation time for the different events, for f = 30 and m = 1, are shown in Fig. 1. Since the performance of the CFO and timing estimation varies, the residual CFOs after compensation between the UE and the BS can differ between experiments, resulting in the non-identical slopes seen in Fig. 1. We denote by Ŷ the unwrapped phase of Y and by ŷ_{f,m} ∈ R^N the vector of phase snapshots obtained from Ŷ for the f-th subcarrier and the m-th RF chain. We define ξ = [1, 2, ..., N]^T ∈ R^N as the snapshot indices, 1_N ∈ R^N as the all-ones column vector, and Ξ ∈ R^{N×2} as Ξ = [1_N, ξ]. Algorithm 2 presents our proposed method for phase-based feature extraction.

Algorithm 2: Phase-based feature extraction
Input: unwrapped phase Ŷ ∈ R^{F×M×N}. Output: phase feature P.
for m = 1 to M do
  for f = 1 to F do
    Linear regression for each f and m: β_{f,m} = (Ξ^T Ξ)^{-1} Ξ^T ŷ_{f,m}.
    Define η_{f,m} ∈ R^N as the deviation between ŷ_{f,m} and the regression line: η_{f,m} = ŷ_{f,m} − β_{f,m}(2)·ξ − β_{f,m}(1)·1_N.
    Define q_{f,m} as the variance of η_{f,m}: q_{f,m} = var(η_{f,m}).
  end
end
Define Q ∈ R^{F×M}, where the entry in the f-th row and m-th column of Q is q_{f,m}.
Calculate the pairwise column correlation of Q: S̃ ∈ R^{M×M}.
Perform the eigenvalue decomposition S̃ = Ũ Σ̃ Ũ^T.
Sort the eigenvalues in descending order and discard the first eigenvalue; the rest are stored in P as the phase-based features.

In the algorithm, β_{f,m} ∈ R^2 is the linear regression vector of ŷ_{f,m}, obtained using the least-squares method [14].

We now describe the ML models, an SVM and a feedforward NN, that are applied to the classification task by leveraging the features extracted with the proposed algorithms. We denote the classifier model as f(x): x ∈ X → c ∈ C, where the input x = [A, P] is the concatenation of the aforementioned amplitude- and phase-based features, and c is the predicted output label. The training data set is defined as {x(t), c(t)}_{t=1}^{N_T}, where N_T is the size of the data set; the ML models are trained on this set to learn the relation between the input features and the output label. The considered ML models are as follows. (i) SVM: a classical supervised ML model for classification tasks, widely used because it requires few predefined parameters and learns its weights through a simple optimization problem. We implement our algorithm using the sklearn package [15] with a linear kernel; since it is difficult to design a kernel that models the nonlinearity in the data set, we resort to the standard linear kernel. (ii) Feedforward NN: in contrast to the SVM, the NN transforms the input through a linear mapping followed by a nonlinear activation function.
This structure makes the NN efficient at learning the nonlinearity in the data set compared to classical methods. We consider a simple feedforward architecture with fully connected layers, consisting of an input layer, a few hidden layers, and an output layer. The proposed NN architecture is detailed in Table 1. For this architecture, we use the Adam optimizer with the categorical cross-entropy loss, and the model parameters have been designed such that the network performs efficiently. The maximum input dimension to the SVM and NN architectures, shown in Table 1, is set to 12, i.e., the second- to seventh-largest eigenvalues of the amplitude- and phase-based features. Through our experiments, we have observed that further increasing the input dimension does not improve the learning ability of the ML models.
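A NumPy sketch of Algorithm 2 follows. The synthetic phases below use per-subcarrier random walks shared across antennas as a crude stand-in for Doppler-induced phase perturbations, so the numbers are illustrative only.

```python
import numpy as np

def phase_features(Y_hat, keep=6):
    """Algorithm 2 sketch: Y_hat is the unwrapped phase, shape (F, M, N).
    Per (f, m): variance of the residual around a linear fit; then the
    2nd-onward eigenvalues of the antenna-wise correlation of the variances."""
    F, M, N = Y_hat.shape
    xi = np.arange(1, N + 1)
    Xi = np.column_stack([np.ones(N), xi])        # regression matrix [1_N, xi]
    Q = np.empty((F, M))
    for f in range(F):
        for m in range(M):
            beta, *_ = np.linalg.lstsq(Xi, Y_hat[f, m], rcond=None)
            Q[f, m] = np.var(Y_hat[f, m] - Xi @ beta)
    S = np.corrcoef(Q, rowvar=False)              # M x M column correlations
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]
    return eig[1:1 + keep]                        # discard the first eigenvalue

# Toy data: linear CFO-like ramps, with or without a shared phase perturbation
rng = np.random.default_rng(4)
F, M, N = 16, 8, 200
ramps = rng.uniform(-0.1, 0.1, (F, M, 1)) * np.arange(N)
static = ramps + 0.01 * rng.standard_normal((F, M, N))
walk = np.cumsum(rng.standard_normal((F, 1, N)), axis=-1)  # Doppler-like drift
gain = rng.uniform(0.5, 1.5, (1, M, 1))
dynamic = ramps + gain * walk + 0.01 * rng.standard_normal((F, M, N))
P_static = phase_features(static)
P_dynamic = phase_features(dynamic)
```

In the dynamic case the residual variances are strongly correlated across antennas, so the first eigenvalue of S̃ dominates and the retained eigenvalues become small, whereas the static case yields eigenvalues near one; either way, the two cases separate.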
3. MASSIVE MIMO MEASUREMENT SETUP
Table 1. NN architecture with 3,594 trainable parameters.
Layer           | Size | Parameters | Activation function
Input: [A, P]   | 12   | -          | -
Layer 1 (Dense) | 64   | 832        | elu
Layer 2 (Dense) | 32   | 2080       | elu
Layer 3 (Dense) | 16   | 528        | elu
Layer 4 (Dense) | 8    | 136        | elu
Layer 5 (Dense) | 2    | 18         | softmax

Fig. 2. Map over the scenarios. For the LOS scenario, the UEs are placed at the LOS positions; for the NLOS scenario, they are placed at the NLOS position. In the case of a dynamic environment, the activity takes place in the "Event" box.

A measurement campaign was carried out in an indoor laboratory environment at Lund University to capture samples under both LOS and NLOS scenarios. The measurement setup is illustrated in Fig. 2: the UEs and the BS are static and placed in the same room in the LOS scenario, while they are separated by a wall in the NLOS scenario. For each scenario, four different dynamic events were performed, namely waving an aluminium-foil balloon, spinning a bike wheel, spinning and moving a bike wheel, and human dancing. Samples were also collected in static environments. In a given scenario, 18 experiments were conducted for each of the static and dynamic events, resulting in a total of 90 experiments.

The BS is the Lund University massive MIMO testbed (LuMaMi) [16], a software-defined-radio-based testbed that operates in OFDM mode with 100 antennas connected to 100 transceiver chains at a carrier frequency of 3.7 GHz. The antenna elements are separated by half a wavelength and arranged in four rows of 25 elements each; counting row-wise, the odd-numbered elements are vertically polarized and the even-numbered elements are horizontally polarized. The UEs consist of universal software radio peripherals (USRPs) with two transceiver chains and are equipped with either one or two dipole antennas. For each measurement and active UE transceiver chain, a set of frequency points and snapshots was collected over the observation time, this constituting one experiment.
4. RESULTS AND DISCUSSION
In this section, we evaluate the classification performance using the proposed feature-extraction algorithms on the massive MIMO measurements. We denote the measurement events as v1: static, v2: human dancing, v3: spinning bike wheel, v4: waving an aluminium-foil balloon, and v5: spinning and moving bike wheel. We consider the classification problems depicted in Table 2; the features extracted from the events are labelled as shown in the third column of the table. Case 1 depicts the scenario of classifying between static and dynamic environments, Case 2 between the human dancing activity and one non-human moving object, and Case 3 is similar to Case 2 except that all non-human moving objects are considered.

Table 2. Classification of different moving objects.
Case 1: between {v2, v3, v4, v5} and {v1}; labels {v2, v3, v4, v5} → '1', {v1} → '0'.
Case 2: between {v2} and one non-human event; labels {v2} → '1', non-human event → '0'.
Case 3: between {v2} and {v3, v4, v5}; labels {v2} → '1', {v3, v4, v5} → '0'.

Table 3. Confusion matrices for Case 1 when M = 100.

The classification accuracies for Cases 1–3 with M = 100 and M = 3 antennas are shown in Figs. 3(a) and 3(b), respectively. For training and testing (train:test) the ML models, we consider splits of (80%:20%), (70%:30%), and (80%:20%) for Cases 1, 2, and 3, respectively; the different values are due to the different sizes of the data sets. The objective of this paper is to showcase the potential of using massive MIMO for moving object classification; it should thus be noted that the results presented here are preliminary, as the number of experiments is small. Although the results in Fig. 3(a) seem promising, further investigation with more diverse measurements is needed.

For Cases 1–3, the confusion matrices for M = 100 are presented in Tables 3–5. In the tables, the diagonal entries represent the number of correctly classified samples, whereas the off-diagonal entries represent the number of misclassified samples. From Fig. 3(b), for both the LOS and NLOS scenarios, the NN outperforms the SVM-based method, since the SVM is not able to capture the nonlinearity in the data set across the different dynamic activities. The careful selection of hyperparameters and hidden layers, with linear mappings followed by nonlinear activation functions, enables the NN to learn the input-output relationship better and thus achieve good classification accuracy. In Table 1, we have shown the number of trainable parameters assuming the upper limit on the input dimension, i.e., six eigenvalues each from the amplitude- and phase-based features; through empirical experiments we have observed that for Case 2 only two eigenvalues each are needed. For comparison, the classification accuracies for M = 3 are depicted in Fig. 3(b). As expected, M = 100 outperforms M = 3, since the spatial correlation changes are exploited better with the large antenna array. The confusion matrices for M = 3 are presented in Tables 6–8 for Cases 1–3. From these matrices, it is evident that the probability of misclassifying the events is higher for M = 3 than for M = 100.
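As a sketch of the evaluation protocol, the linear-kernel SVM with an 80%:20% split can be reproduced with scikit-learn. The 12-dimensional Gaussian features below are hypothetical placeholders for the extracted [A, P] features, and the class means are chosen to be separable.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 45                                          # samples per class (toy)
X = np.vstack([rng.normal(0.0, 0.5, (n, 12)),   # class '0' (e.g., static)
               rng.normal(2.0, 0.5, (n, 12))])  # class '1' (e.g., dynamic)
y = np.repeat([0, 1], n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)      # linear kernel, as in the paper
acc = clf.score(X_te, y_te)
```

On well-separated synthetic features like these, the test accuracy is essentially perfect; on the real features, the separation (and hence the accuracy) depends on the case and scenario.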
5. CONCLUSION
We have presented machine-learning algorithms for the classification of human and non-human activities using wireless
Fig. 3. Classification accuracy of Cases 1–3 in the LOS and NLOS scenarios using the SVM and NN models for (a) M = 100 and (b) M = 3.

Table 4. Confusion matrices for Case 2 when M = 100.

Table 5. Confusion matrices for Case 3 when M = 100.

Table 6. Confusion matrices for Case 1 when M = 3.

Table 7. Confusion matrices for Case 2 when M = 3.

Table 8. Confusion matrices for Case 3 when M = 3.
signals received at a massive MIMO base station. We tested the methods on data obtained from a measurement campaign conducted indoors, using the 100-antenna LuMaMi massive MIMO testbed operating at a 3.7 GHz carrier frequency [16]. Experiments were conducted in both LOS and NLOS scenarios, as well as in both static and dynamic environments. In the experimental tests, our proposed algorithms could successfully distinguish between human (e.g., a person dancing) and non-human (e.g., a spinning bike wheel) activities. Furthermore, the classification performance when using all M = 100 antennas at the base station was significantly better than when using only M = 3 antennas (a small subset of the array). This suggests that the spatial resolution capabilities offered by massive MIMO technology have the potential to significantly enhance the accuracy of wireless sensing applications.

6. REFERENCES

[1] J. Liu, H. Liu, Y. Chen, Y. Wang, and C. Wang, "Wireless sensing for human activity: A survey," IEEE Commun. Surveys Tuts., vol. 22, no. 3, pp. 1629–1645, 3rd Quart. 2020.
[2] W. Wang, A. X. Liu, M. Shahzad, K. Ling, and S. Lu, "Device-free human activity recognition using commercial WiFi devices," IEEE J. Sel. Areas Commun., vol. 35, no. 5, pp. 1118–1131, May 2017.
[3] C. Wu, Z. Yang, Z. Zhou, K. Qian, Y. Liu, and M. Liu, "PhaseU: Real-time LOS identification with WiFi," in Proc. IEEE International Conference on Computer Communications (INFOCOM), Kowloon, Hong Kong, Apr. 26–May 1, 2015, pp. 2038–2046.
[4] C. Huang, A. F. Molisch, R. He, R. Wang, P. Tang, B. Ai, and Z. Zhong, "Machine learning-enabled LOS/NLOS identification for MIMO systems in dynamic environments," IEEE Trans. Wireless Commun., vol. 19, no. 6, pp. 3643–3657, Jun. 2020.
[5] J. Xiao, K. Wu, Y. Yi, L. Wang, and L. M. Ni, "FIMD: Fine-grained device-free motion detection," in Proc. IEEE 18th International Conference on Parallel and Distributed Systems (ICPADS), Singapore, Dec. 17–19, 2012, pp. 229–235.
[6] H. Zhu, F. Xiao, L. Sun, R. Wang, and P. Yang, "R-TTWD: Robust device-free through-the-wall detection of moving human with WiFi," IEEE J. Sel. Areas Commun., vol. 35, no. 5, pp. 1090–1103, May 2017.
[7] J. Zhang, B. Wei, W. Hu, and S. S. Kanhere, "WiFi-ID: Human identification using WiFi signal," in Proc. International Conference on Distributed Computing in Sensor Systems (DCOSS), Washington, DC, USA, May 26–28, 2016, pp. 75–82.
[8] J. Ding, Y. Wang, and X. Fu, "WiHi: WiFi based human identity identification using deep learning," IEEE Access, vol. 8, pp. 129246–129262, 2020.
[9] L. Zhang, X. Ruan, and J. Wang, "WiVi: A ubiquitous violence detection system with commercial WiFi devices," IEEE Access, vol. 8, pp. 6662–6672, 2020.
[10] M. Kotaru, K. Joshi, D. Bharadia, and S. Katti, "SpotFi: Decimeter level localization using WiFi," ACM SIGCOMM Comput. Commun. Rev., vol. 45, pp. 269–282, Aug. 2015.
[11] M. Arnold, J. Hoydis, and S. ten Brink, "Novel massive MIMO channel sounding data applied to deep learning-based indoor positioning," in Proc. 12th International ITG Conference on Systems, Communications and Coding (SCC), Feb. 11–14, 2019, pp. 119–124.
[12] S. De Bast, A. P. Guevara, and S. Pollin, "CSI-based positioning in massive MIMO systems using convolutional neural networks," in Proc. IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, May 25–28, 2020, pp. 1–5.
[13] S. Riyaz, K. Sankhe, S. Ioannidis, and K. Chowdhury, "Deep learning convolutional neural networks for radio identification," IEEE Commun. Mag., vol. 56, no. 9, pp. 146–152, Sep. 2018.
[14] G. Inghelbrecht, R. Pintelon, and K. Barbé, "Large-scale regression: A partition analysis of the least squares multisplitting," IEEE Trans. Instrum. Meas., vol. 69, no. 6, pp. 2635–2647, Jun. 2020.
[15] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[16] S. Malkowsky, J. Vieira, L. Liu, P. Harris, K. Nieman, N. Kundargi, I. C. Wong, F. Tufvesson, V. Öwall, and O. Edfors, "The world's first real-time testbed for massive MIMO: Design, implementation, and validation," IEEE Access, vol. 5, pp. 9073–9088, 2017.