Edge-Detect: Edge-centric Network Intrusion Detection using Deep Neural Network
Praneet Singh⋆, Jishnu Jaykumar P⋆, Akhil Pankaj, Reshmi Mitra†
Indian Institute of Science, Bengaluru, India
Southeast Missouri State University, Cape Girardeau, USA
{praneetsingh, jishnuj, akhilpankaj}@iisc.ac.in, [email protected]

Abstract—Edge nodes are crucial for detection against multitudes of cyber attacks on Internet-of-Things endpoints and are set to become part of a multi-billion industry. The resource constraints in this novel network infrastructure tier constrict the deployment of existing Network Intrusion Detection Systems with Deep Learning models (DLM). We address this issue by developing 'Edge-Detect', a novel light, fast and accurate model which detects Distributed Denial of Service attacks on edge nodes using DLM techniques. Our model can work within resource restrictions, i.e. low power, memory and processing capabilities, to produce accurate results at a meaningful pace. It is built by creating layers of Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) based cells, which are known for their excellent representation of sequential data. We designed a practical data science pipeline with Recurrent Neural Networks to learn from the network packet behavior in order to identify whether it is normal or attack-oriented. The model is evaluated by deployment on an actual edge node, represented by a Raspberry Pi, using a current cybersecurity dataset (UNSW2015). Our results demonstrate that in comparison to conventional DLM techniques, our model maintains a high testing accuracy of ∼99% even with lower resource utilization in terms of CPU and memory. In addition, it is nearly 3 times smaller in size than the state-of-art model and yet requires a much lower testing time.
Index Terms—Edge computing, Deep Learning, Internet of Things, DDoS, Recurrent Neural Networks
I. INTRODUCTION
The Symantec 2019 Internet Security Threat Report [1] states that recently the Internet of Things (IoT) has become a new infection vector for cyber attacks such as Distributed Denial of Service (DDoS). However, attack detection is still an open and challenging problem because of the dynamic, distributed, heterogeneous, and collaborative nature of IoT devices. Given their resource constraints, a growing body of work has postulated that the threat detection function is best suited to be pushed to the edge nodes as a first line of defense [2]. However, the resource-intensive algorithms of Deep Learning models (DLM) are unsuitable for this newly emerging network infrastructure tier, i.e. edge nodes.

Standard security datasets such as KDD Cup 99 [3], MIT Lincoln Laboratory DARPA 2000 [4], CAIDA 2010 [5], and the TUIDS DDoS dataset 2012 [6] are outdated given the advancement of computer network protocols and equipment. To the best of our knowledge, a comprehensive dataset for the IoT or edge computing paradigm is currently nonexistent. Among the state-of-art on attack detection using DLM, the DeepDefense model [7] produced some of the best results in terms of prediction accuracy. Their efforts determined that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) are the most effective DLM for analyzing network packets using the UNB ISCX Intrusion Detection Evaluation 2012 DataSet (henceforth ISCX2012 for short) [8].

⋆ Equal contribution. † Corresponding author: [email protected]

Motivated by these inadequacies, we have designed the
Edge-Detect model to enable DDoS detection on edge devices using light yet powerful and fast DLMs built by stacking FAST cells [9]. The term 'light' emphasizes the resource requirements and 'fast' denotes the processing performance. Edge-Detect is targeted towards the IoT security architect community, since its intended purpose is to safeguard IoT endpoints. Fig. 1 contrasts our design with the prior DDoS detection point located on the cloud server, which suffers from crucial detection latency. Being deployed at the edge node, our model becomes a faster path to examine the sequence of IoT network packets for potential attacks. We chose the UNSW2015 [10] dataset for our evaluation because it fulfills all crucial criteria: it is relatively current compared to other datasets, it has a clear separation between training and testing sets, and it has correct labels for the network features.

The input for our DLM pipeline is a sequence of individual packets in the packet capture file, which are collated as windows of fixed length. This transformation is accompanied by reduced features and modification of the attack label to signify whether an attack occurred in that particular window. These window sequences are processed through the DLM which, in essence, is a network of LSTM or GRU cells. Our results verify that our model can outperform the state-of-art on multiple levels, namely accuracy, precision, size (Kilobytes) and resource performance. To understand the model behaviour and deployment issues, we evaluated it on a Raspberry Pi 3 and observed that other regular processes on the edge node are not starved. It maintains a high testing accuracy of ∼99%.

Fig. 1. Proposed location of Edge-Detect in comparison to the previous detection point in the cloud side. Transferring data from the point of attack up the hierarchy to the cloud server causes crucial detection delays. The Edge-Detect model (blue box) represents the edge node with the added capabilities of our light and fast recurrent neural network (RNN) module. The packet capture files from the IoT endpoints become the input, which are parsed and analyzed for detection purposes. These detection results can help drive the recommendation module for corrective actions.

Although several DLMs are available for IoT, limited detail regarding model development, deployment and resource usage is available according to our literature review. Hence, it is very difficult for IoT security architects with limited man hours or DLM skills to reuse and apply that work. We investigate practical design issues such as DLM network layers and feature engineering to build a realistic model, and present the validation results in this paper. In addition, our model is available in a public repository [11] and can be deployed by an IoT security architect with minimal modification or training.

The main contributions of this paper are listed below:
1) Design of a light and fast Deep Learning based Edge-Detect model, by stacking layers of LSTM and GRU cells, for resource-constrained computers.
2) Model validation on a standard network security dataset (UNSW2015), with trade-off analysis on accuracy, performance, and cost.
3) Highlighting accuracy and system performance issues by deploying our DLM on an actual edge node.

This paper has four main sections. Section II presents a summary of the state-of-art on DLM and cybersecurity analytics. Section III describes the Edge-Detect model design. Model evaluation and discussion are part of Section IV. We conclude with the highlights of our work in Section V.

II. RELATED WORK
The earliest works on statistical techniques for intrusion detection in network packets, [12] and [13], appeared about two decades ago. The first comprehensive survey article [14] addressing the anomaly detection problem using machine learning techniques appeared about a decade ago and called it "Cyber-Intrusion Detection". The techniques suggested in that article include Bayesian Networks, Neural Networks, Support Vector Machines (SVM), Clustering and Nearest Neighbor. Many early challenges in applying machine learning (ML) techniques to network intrusion detection systems (NIDS) include understanding the threat model, keeping the model scope narrow and the lack of training datasets, among others, as explained in [15].

An early survey article [16] to discuss specifically the 'network' intrusion detection problem within the anomaly detection area focused on the techniques, systems, datasets and tools related to NIDS. They categorized these in terms of capability, performance, dataset used, and matching and detection mechanisms, among others. A few key challenges presented in the paper include run-time limitations, dependence on the environment, the nature of the anomaly and the lack of unbiased datasets. Although DLM can solve some of these problems, the need for a quality dataset which reflects the operating 'symptoms' of an attack is still an unsolved issue.

A comparison of an RNN model with different ML methods such as J48, artificial neural networks, random forests and SVM on the NSL-KDD dataset [17] is shown in [18]. This benchmark is an improved version of the earlier KDD dataset [3]. They evaluated the impact on accuracy w.r.t. classification (binary and multiclass), number of neurons and diverse learning rates, reaching accuracy values of about 83%. We found that traditional ML or shallow neural networks are impractical for large network traffic data, which is set to reach the order of zettabytes by 2021. In contrast, DLM eliminates the need for domain expertise by obtaining abstract correlations and reduces human effort in pre-processing.

The DeepDefense model [7] is the earliest prominent work to use RNN-based DLM for this problem domain. Their solution is based on RNN cells such as LSTM and GRU, because these have proved to be suitable for other sequential data problems such as speech recognition, language translation and speech synthesis, among others. They evaluated their DLM on the UNB ISCX Intrusion Detection Evaluation 2012 dataset. We advance their work by building two different light and fast network models which are deployed on an actual edge node, achieving comparable accuracy margins on the low-resource platform. Although several prior works exist for earlier datasets, we are ignoring them here for multiple reasons: dataset relevance, lower accuracy w.r.t. current models, and inability to reproduce results, which renders them unsuitable.

Recent work [19] has proposed DLM for cybersecurity in IoT networks. The models provided very good detection accuracy of 97.16%. In another excellent work [20], the authors developed a 'deep hierarchical network' by cascading two types of networks (LeNet-5 and LSTM). They applied their model on the CICIDS2017 [21] dataset, achieving an accuracy of about 90%. However, they claim that their model can automatically select temporal and spatial features from existing input traffic without providing substantial details, which is a hard problem even in the ML community.

A prominent effort for social IoT [22] uses distributed deep learning. Their model contains three hidden layers for feature learning and soft-max regression (SMR) for the classification task. In comparison to their distributed approach, which involves participating nodes exchanging multiple parameters and is computationally intensive, our focus is a centralized approach of maximizing the capabilities of each node.
There have been other significant efforts in using DLM for attack detection in allied fields such as Software Defined Networks, e.g. [23], [24] and [25]. However, we have limited our focus to IoT and edge devices for brevity.

III. EDGE DDOS DETECTION PIPELINE
Built using standard data science techniques, the main pipeline stages in Edge-Detect are: (1) pre-processing, (2) neural network model design, (3) training and optimization, and (4) deployment on an edge device. These stages are illustrated in Fig. 2. The pre-processing stage uses the network packets from the UNSW2015 dataset to select significant features and convert individual packets into window sequences. Stage 2 involves DLM design using the provably fast and light FAST cells. They are preferable to the standard RNN cells such as LSTM and GRU due to the residual gate connection [9]. Stage 3 of the pipeline consists of training and optimization of the RNN model from Stage 2. Finally, the model is tested for deployment on an actual Raspberry Pi node to distinguish packets undergoing DDoS attack from normal packets.
A. Stage 1: Pre-processing
The UNSW2015 dataset is a network packet capture (pcap) CSV file, which is correctly pre-labeled as normal or attack-oriented. We reduce it into a series of windows with reduced features, due to the limited processing capabilities of the edge node. From the 49 features available in ISCX2012 (the prior dataset), [7] produced remarkable results by applying DLM on 20 features. To determine whether further reduction in the number of features is even possible, we used the 11 features of [27] as our reference. We summarize this feature selection process for conciseness in this paper. We performed a standard pre-processing technique called "one-hot encoding" on the feature termed "state" in the dataset, replacing it with 15 additional features based on its various category values.

To build a comprehensive model of the network patterns in the entire dataset, it is important to 'learn' the characteristics from all preceding windows irrespective of attack occurrence. This issue is addressed with the sliding window approach, where each window is moved by a single packet to analyze whether the prior (T-1) packets have led to an attack in the current packet. This is shown in Fig. 3. A single window consists of T packets, each with the n features reduced to n′ features, and carries a binary label for the entire window depicting the occurrence of a DDoS attack in its last packet. This means that applying DLM on this window is equivalent to learning from the information of its (T-1) constituent packets to determine the attack occurrence in the T-th packet. The m packets produce a total of (m - T + 1) windows under this sliding scheme. Summarizing, the transformation in this stage reduces the initial 49 (= n) features to 25 (= n′) distinct features. Overall, from the initial size of (m × n), the dataset at the end of this stage consists of (m - T + 1) windows, each of size (T × n′).

Fig. 2. Edge DDoS detection pipeline consisting of these main steps: pre-processing, developing a neural network model based on FAST cells, model training, optimization and testing on an edge device.

Fig. 3. Packet to window transformation: in Stage 1, input data from the UNSW2015 dataset is transformed into window sequences using the sliding window approach for RNN model training.

B. Stage 2: Edge-Detect Model Design
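As input to this stage, the windows produced by the Stage 1 sliding-window transformation can be generated with a minimal NumPy routine. This is a sketch under the assumption that the reduced-feature packets arrive as an (m × n′) matrix with a per-packet binary label; the function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def make_windows(packets: np.ndarray, labels: np.ndarray, T: int):
    """Slide a length-T window one packet at a time.

    packets: (m, n_prime) array of reduced-feature packets.
    labels:  (m,) binary array, 1 if that packet is attack-oriented.
    Returns (m - T + 1, T, n_prime) windows plus one label per window,
    taken from its last (T-th) packet, as described in Stage 1.
    """
    m, n_prime = packets.shape
    num_windows = m - T + 1
    windows = np.stack([packets[i:i + T] for i in range(num_windows)])
    window_labels = labels[T - 1:]   # label of the T-th packet of each window
    return windows, window_labels

# Toy example: m = 6 packets, n' = 3 features, window length T = 4
pkts = np.arange(18, dtype=float).reshape(6, 3)
lbls = np.array([0, 0, 0, 1, 0, 1])
X, y = make_windows(pkts, lbls, T=4)
print(X.shape)      # (3, 4, 3), i.e. (m - T + 1, T, n')
print(y.tolist())   # [1, 0, 1]
```

For large captures, `numpy.lib.stride_tricks.sliding_window_view` would avoid copying each window, but the explicit loop above keeps the (m - T + 1) window count easy to check against the text.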
Our model is built using layers of FAST cells, which are either LSTM or GRU based (GRU is, in fact, a variant of LSTM). The advantage of using LSTM and GRU cells is that each unit 'remembers' the existence of a specific feature present in the input stream, which makes them successful for sequential applications. These LSTM/GRU layers are followed by a dense layer of 128 cells and finally the output layer, as shown in Fig. 4. The activation functions used are 'tanh' for the LSTM and GRU layers, 'ReLU' for the dense layer and 'sigmoid' for the output layer in all model variants. We used the ReLU function as the activation function of the hidden layers; this non-linear activation function can enhance model performance by expressing a complicated classification boundary better than a linear activation function.

To signify whether it is associated with an attack, each packet is labeled with a binary value of 1 or 0 in the (input) dataset. This identification is inferred from probability values when applying the model. In our case, the output layer assigns probability values depending upon the weights learned from the previous layers, up to and including the dense layer. In order to determine whether the packets are normal or attack-oriented, the Edge-Detect model compares this probability with a certain threshold value. The results reported in this paper use a threshold of 0.8. The output layer is labeled as "DDoS => (1-p)" as shown in Fig. 4.

IV. MODEL EVALUATION
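The model under evaluation can be approximated in Keras TensorFlow roughly as follows. This is a sketch, not the authors' implementation: a stock GRU layer stands in for the FAST cells of [9] (which come from a separate library), and the window length T and feature count n′ are assumed values matching the Stage 1 window shape.

```python
# Rough Keras sketch of the Edge-Detect architecture: one recurrent
# layer of 128 cells (tanh), a dense layer of 128 ReLU cells, batch
# normalization after each, and a sigmoid output unit.
import numpy as np
from tensorflow.keras import layers, models

T, n_prime = 20, 25   # assumed window length and reduced feature count

model = models.Sequential([
    layers.GRU(128, activation="tanh"),     # stand-in for the FAST cell layer
    layers.BatchNormalization(),
    layers.Dense(128, activation="relu"),   # dense layer of 128 cells
    layers.BatchNormalization(),
    layers.Dense(1, activation="sigmoid"),  # probability the window is malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# A window is flagged as DDoS when the predicted probability exceeds
# the 0.8 threshold used in the paper.
probs = model.predict(np.zeros((2, T, n_prime), dtype="float32"), verbose=0)
flags = probs[:, 0] > 0.8
```

Calling `predict` on a dummy batch builds the model with the (T, n′) input shape; in practice the windows from Stage 1 would be fed to `model.fit` before any thresholding is meaningful.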
We evaluate the standard metrics of accuracy, loss, precision and recall for the UNSW2015 dataset on an edge node represented by a Raspberry Pi 3, corresponding to Stages 3 and 4 in Fig. 2. The goal is to identify the most suitable DLM which can meet accuracy as well as resource usage criteria for this newly emerging infrastructure tier.
Fig. 4. Edge-Detect model architecture. The output layer assigns probabilities p and (1-p) to the input packet window for being normal and malicious, respectively. Every RNN layer and every fully connected layer is followed by a batch normalization layer to accelerate network training. (The figure legend maps symbols to the tanh, ReLU and sigmoid activations.)

A. Experimental Results
The system-on-chip in our Raspberry Pi is a Broadcom BCM2837 with a quad-core ARM Cortex-A53 operating at a frequency of 1.2 GHz. In addition, it has a Broadcom VideoCore IV GPU and 1 GB of LPDDR2 RAM running at 900 MHz. For training, we use Google Cloud services, where the CPU configuration is Intel Haswell with 8 virtual CPUs and 32 GB memory, plus a Tesla P100 GPU and 100 GB HDD. We used Keras TensorFlow [26] to enhance the FAST cells and stack them in layers of LSTM and GRU for developing powerful networks with better accuracy.

We begin our investigation by regenerating the results of the DeepDefense model [7] using the UNSW2015 dataset, since the original work was based on the prior ISCX2012 dataset. Although our study began with the DeepDefense models, it is crucial to point out that it is impossible to deploy them on the edge (Raspberry Pi) node. This is true even after feature engineering or scaling down the model by reducing the number of cells/layers. During our preliminary investigation, we observed that these models completely deplete the swap memory on this resource-constrained platform. The authors have also concurred that building a light-weight model was not their intended purpose.

Our model evaluation results for the key performance metrics, namely accuracy, loss, precision and recall on the Raspberry Pi, are summarized in Table I. The comparison with the state-of-art is shown in Table II. In contrast to the high memory requirement (third column) of the DeepDefense models, we achieve a size reduction of 66% with slightly better accuracy (fourth column). The weight drop is due to the adequacy of a single layer in our model, versus the four-layer requirement in theirs. This is crucial given the platform resource restrictions, since it is impossible to accommodate a large number of computations to achieve reasonable accuracy. The table also shows the cell type used for each model in the second column. Table III presents the AUC, Kappa and F1 scores for our model results as represented by the last two rows of Table II.

TABLE I
PERFORMANCE EVALUATION OF EDGE-DETECT RESULTS

Cell type   Accuracy  Loss   Precision  Recall
FastRNN     99.6%     4%     99.5%      99.75%
FastGRNN    99.5%     2.4%   99.5%      99.55%

TABLE II
COMPARING EDGE-DETECT WITH THE CORRESPONDING DEEPDEFENSE MODELS

Category     Cell type  Weight    Accuracy  Layers  Cells
DeepDefense  LSTM       1684 KB   98%       4       64
DeepDefense  GRU        1314 KB   98%       4       64
Edge-Detect  FastRNN    598 KB    99%       1       128
Edge-Detect  FastGRNN   609 KB    99%       1       128
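The accuracy, precision, recall, F1 and Kappa figures reported in Tables I-III can be computed from the binary window predictions with plain NumPy; a self-contained sketch follows (AUC additionally requires the raw probabilities rather than thresholded labels, so it is omitted here, and the example arrays are illustrative).

```python
import numpy as np

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Confusion-matrix metrics for binary window labels (1 = attack)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (acc - p_chance) / (1 - p_chance)
    return {"accuracy": acc, "precision": precision,
            "recall": recall, "f1": f1, "kappa": kappa}

# Toy illustration on six windows
m = binary_metrics(np.array([1, 1, 0, 0, 1, 0]),
                   np.array([1, 0, 0, 0, 1, 1]))
print(round(m["f1"], 4))      # 0.6667
print(round(m["kappa"], 4))   # 0.3333
```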
B. Resource Statistics
Motivated by our accuracy results, we performed a deeper exploration to identify the most suitable and practical DLM for DDoS detection on the edge node. This second part of the evaluation involves understanding the hardware execution parameters and trends. Using 'top' (a Linux utility), we measure the average utilization of CPU per core, and the resident and virtual memory, in order to monitor the processor, swap memory and RAM while the DLM is executing on the Raspberry Pi. We use the Linux 'time' command to gauge the model testing time. These evaluations, shown in Fig. 5, demonstrate that FastGRNN is comprehensively slightly better suited for our cyber-defense application. Our experiments also revealed that Edge-Detect provides enough contingency for concurrent execution of other processes on the Raspberry Pi. We have done preliminary experimentation in this direction; however, detailed investigation is beyond the scope of this paper.

TABLE III
RESULTS FOR EDGE-DETECT MODELS BUILT USING CELL TYPES FASTRNN AND FASTGRNN. BOTH MODEL INSTANCES ARE SINGLE LAYER WITH 128 NEURONS.

Cell type   AUC   Kappa   F1 score
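The per-run measurements described above can also be gathered from inside the test process with the Python standard library, as a stand-in for the external 'top' and 'time' utilities. This is a hedged sketch: `resource.getrusage` is Unix-only, `ru_maxrss` is reported in kilobytes on Linux, and the measured function here is a placeholder for running model inference.

```python
# Stdlib-only sketch: wall-clock testing time (cf. the 'time' command)
# plus CPU time and peak resident memory (cf. 'top') for this process.
import resource
import time

def measure(fn, *args):
    wall0 = time.perf_counter()
    result = fn(*args)
    wall = time.perf_counter() - wall0
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return result, {
        "wall_seconds": wall,
        "cpu_seconds": usage.ru_utime + usage.ru_stime,
        "peak_rss_kb": usage.ru_maxrss,   # kilobytes on Linux
    }

# Placeholder workload standing in for a model.predict() call
result, stats = measure(sum, range(1_000_000))
```

Unlike sampling 'top' externally, `getrusage` reports the peak resident set size over the whole process lifetime, so it should be read as an upper bound on the memory the inference step itself required.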
C. Discussion
The first main observation emerging from the evaluation is that Edge-Detect achieves accuracy (∼99%) on par with the state-of-art. Secondly, windows of packets with fewer features achieve comparable accuracy w.r.t. the DeepDefense models on the newer dataset. By doing so, we reduce the number of required computations on the edge node and improve training time. In addition, the experiments verify that testing accuracy increases after training the models on a selected-feature dataset. This is sustainable for the edge node, because loading windows with all the features is resource-intensive and hence impractical for the Raspberry Pi.
Thirdly, a high number of layers does not necessarily translate into better accuracy. For example, 64 neurons can produce similar accuracy and precision in a single layer as compared to four layers.
Overall, the characteristics of Edge-Detect defy important processor and memory utilization assumptions of prior DLM, which have made them bulky for resource-constrained platforms. Our work also evolved towards the importance of feature engineering, which involves careful hyper-parameter tuning of learning rate, decay rate and batch size, among others. However, this is not the main objective of the paper and is beyond the current scope.

V. CONCLUSION
Deploying cyber-defense solutions based on standard NIDS techniques for IoT endpoints on the network edge is a topic of immense current interest among academic and industry researchers. To analyze the exploding volumes of multi-modal network packet data, the most prevalent technique is anomaly detection based on ML/DLM. An appropriate deployment point is closer to the IoT attack surface, i.e. edge nodes. However, their minimal resources have imbalanced the trade-offs between prediction cost, deployment speed and accuracy. In order to overcome such limitations of the existing models, it is critical to develop new designs for resource-constrained edge computing. In this paper, we propose Edge-Detect: a light and efficient DLM-enabled DDoS detection with edge nodes as the deployment point. Built using temporally sensitive neural networks such as LSTM and GRU, it learns from the network packet behavior to identify whether it is normal or attack-oriented. With a minimal number of layers and light FAST cells, it works within resource restrictions to produce very accurate results with minimum training cost. We designed a practical data science pipeline based on RNN layers, validated it on the recent, bulky UNSW2015 dataset and showed successful deployment. The investigation results demonstrate that in comparison to conventional DLM techniques, our model maintains a high testing accuracy of ∼99%.

ACKNOWLEDGEMENT
∗ Aditya Kusupati for his valuable suggestions.
∗ Staff at IISc for providing us with their GPU and desktop to conduct our initial experiments.

REFERENCES
[1] "Symantec Internet Security Threat Report," vol. 24, February 2019. [Online]. Available: https://docs.broadcom.com/doc/istr-24-2019-en [Accessed on July 21, 2020].
[2] K. Bhardwaj, J. C. Miranda, and A. Gavrilovska, "Towards IoT-DDoS prevention using edge computing," in USENIX Workshop on Hot Topics in Edge Computing (HotEdge), 2018.
[3] S. J. Stolfo, W. Fan, W. Lee, A. Prodromidis, and P. K. Chan, "Cost-Based Modeling for Fraud and Intrusion Detection: Results from the JAM Project," in DARPA Information Survivability Conference and Exposition (DISCEX'00), pp. 130-144, 2000.
[4] R. Lippmann, J. W. Haines, D. J. Fried, J. Korba, and K. Das, "The 1999 DARPA off-line intrusion detection evaluation," Elsevier Computer Networks, vol. 34, pp. 579-595, 2000.
[5] Center for Applied Internet Data Analysis, UC San Diego, "DDoS Attack 2007 dataset," 2010.
[6] P. Gogoi, M. H. Bhuyan, D. K. Bhattacharyya, and J. K. Kalita, "Packet and flow based network intrusion dataset," in Springer International Conference on Contemporary Computing, pp. 322-334, 2012.
[7] X. Yuan, C. Li, and X. Li, "DeepDefense: Identifying DDoS Attack via Deep Learning," in IEEE International Conference on Smart Computing (SMARTCOMP), pp. 1-8, 2017.
[8] A. Shiravi, H. Shiravi, M. Tavallaee, and A. A. Ghorbani, "Toward Developing a Systematic Approach to Generate Benchmark Datasets for Intrusion Detection," Elsevier Computers & Security, vol. 31, pp. 357-374, 2012.
[9] A. Kusupati, M. Singh, K. Bhatia, A. Kumar, P. Jain, and M. Varma, "FastGRNN: A Fast, Accurate, Stable and Tiny Kilobyte Sized Gated Recurrent Neural Network," in Advances in Neural Information Processing Systems, pp. 9017-9028, 2018.
[10] N. Moustafa and J. Slay, "UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems," in IEEE Military Communications and Information Systems Conference (MilCIS), pp. 1-6, 2015.
[11] Edge-Detect GitHub repository: https://github.com/racsa-lab/EDD [Accessed on July 21, 2020].
[12] D. Anderson, T. F. Lunt, H. Javitz, A. Tamaru, and A. Valdes, "Detecting Unusual Program Behavior Using the Statistical Component of the Next-generation Intrusion Detection Expert System (NIDES)," Tech. Report SRI-CSL-95-06, SRI International Computer Science Laboratory, Menlo Park, California, 1995.
[13] J. M. Bonifácio, A. M. Cansian, A. C. P. L. F. De Carvalho, and E. S. Moreira, "Neural Networks Applied in Intrusion Detection Systems," in IEEE World Congress on Computational Intelligence (WCCI), pp. 205-210, 1998.
[14] V. Chandola, A. Banerjee, and V. Kumar, "Anomaly Detection: A Survey," ACM Computing Surveys (CSUR), vol. 41, 2009.
[15] R. Sommer and V. Paxson, "Outside the Closed World: On Using Machine Learning for Network Intrusion Detection," in IEEE Symposium on Security and Privacy, pp. 305-316, 2010.
[16] M. H. Bhuyan, D. K. Bhattacharyya, and J. K. Kalita, "Network anomaly detection: Methods, systems and tools," IEEE Communications Surveys & Tutorials, vol. 16, pp. 303-336, 2013.
[17] M. Tavallaee, E. Bagheri, W. Lu, and A. Ghorbani, "A Detailed Analysis of the KDD CUP 99 Data Set," in IEEE Symposium on Computational Intelligence for Security and Defence Applications, 2009.
[18] C. L. Yin, Y. F. Zhu, J. L. Fei, and X. Z. He, "A Deep Learning Approach for Intrusion Detection Using Recurrent Neural Networks," IEEE Access, vol. 5, pp. 21954-21961, 2017.
[19] M. Roopak, G. Y. Tian, and J. Chambers, "Deep Learning Models for Cyber Security in IoT Networks," in IEEE Annual Computing and Communication Workshop and Conference (CCWC), pp. 452-457, 2019.
[20] C. Zhang, P. Patras, and H. Haddadi, "Deep Learning in Mobile and Wireless Networking: A Survey," IEEE Communications Surveys & Tutorials, 2019.
[21] I. Sharafaldin, A. H. Lashkari, and A. A. Ghorbani, "Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization," in International Conference on Information Systems Security and Privacy (ICISSP), pp. 108-116, 2018.
[22] A. A. Diro and N. Chilamkurti, "Distributed Attack Detection Scheme Using Deep Learning Approach for Internet of Things," Elsevier Future Generation Computer Systems, vol. 82, pp. 761-768, 2017.
[23] Q. Niyaz, W. Sun, and A. Y. Javaid, "A Deep Learning Based DDoS Detection System in Software-Defined Networking (SDN)," EAI Endorsed Transactions on Security and Safety, in press, 2017. [Online]. Available: http://arxiv.org/abs/1611.07400 [Accessed on July 21, 2020].
[24] C. Li, Y. Wu, X. Yuan, Z. Sun, W. Wang, X. Li, et al., "Detection and Defense of DDoS Attack Based on Deep Learning in OpenFlow-based SDN," International Journal of Communication Systems, vol. 31, 2018.
[25] M. Al-Qatf, Y. Lasheng, M. Al-Habib, and K. Al-Sabahi, "Deep Learning Approach Combining Sparse Autoencoder With SVM for Network Intrusion Detection," IEEE Access, vol. 6, pp. 52843-52856, 2018.
[26] M. Abadi, A. Agarwal, et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015. Software available from http://tensorflow.org [Accessed on July 21, 2020].
[27] N. Moustafa and J. Slay, "A Hybrid Feature Selection for Network Intrusion Detection Systems: Central Points," 2017. [Online]. Available: https://arxiv.org/abs/1707.05505 [Accessed on July 21, 2020].