Adversarial Machine Learning for 5G Communications Security
Yalin E. Sagduyu, Tugba Erpek, and Yi Shi
Abstract—Machine learning provides automated means to capture complex dynamics of wireless spectrum and support better understanding of spectrum resources and their efficient utilization. As communication systems become smarter with cognitive radio capabilities empowered by machine learning to perform critical tasks such as spectrum awareness and spectrum sharing, they also become susceptible to new vulnerabilities due to the attacks that target the machine learning applications. This paper identifies the emerging attack surface of adversarial machine learning and corresponding attacks launched against wireless communications in the context of 5G systems. The focus is on attacks against (i) spectrum sharing of 5G communications with incumbent users such as in the Citizens Broadband Radio Service (CBRS) band and (ii) physical layer authentication of 5G User Equipment (UE) to support network slicing. For the first attack, the adversary transmits during data transmission or spectrum sensing periods to manipulate the signal-level inputs to the deep learning classifier that is deployed at the Environmental Sensing Capability (ESC) to support the 5G system. For the second attack, the adversary spoofs wireless signals with the generative adversarial network (GAN) to infiltrate the physical layer authentication mechanism based on a deep learning classifier that is deployed at the 5G base station. Results indicate major vulnerabilities of 5G systems to adversarial machine learning. To sustain the 5G system operations in the presence of adversaries, a defense mechanism is presented to increase the uncertainty of the adversary in training the surrogate model used for launching its subsequent attacks.
Index Terms—Adversarial machine learning, deep learning, 5G, spectrum sharing, authentication, network slicing, GAN, jamming, spoofing
I. INTRODUCTION
As the fifth generation mobile communications technology, 5G supports emerging applications such as smart warehouses, vehicular networks, virtual reality (VR) and augmented reality (AR) with unprecedented rates enabled by recent advances in massive MIMO, mmWave communications, network slicing, small cells, and the Internet of Things (IoT). Complex structures of waveforms, channels, and resources in 5G cannot be reliably captured by simplified analytical models driven by expert knowledge. As a data-driven approach, machine learning has emerged as a viable alternative to support 5G communications by learning from and adapting to the underlying spectrum dynamics [1]. Empowered by recent advances in algorithmic design and computational hardware resources, deep learning shows strong potential to learn the high-dimensional data characteristics of wireless communications beyond conventional machine learning techniques [2] and offers novel solutions to critical tasks of detection, classification, and prediction in 5G systems.

As machine learning becomes a core part of next-generation communication systems, there is an increasing concern about the vulnerability of machine learning to adversarial effects. To that end, smart adversaries may leverage emerging machine learning techniques to infer vulnerabilities in 5G systems and tamper with the learning process embedded in 5G communications. The problem of learning in the presence of adversaries is the subject of adversarial machine learning, which has received increasing attention in the computer vision and natural language processing (NLP) domains [3]–[5]. Due to the shared and open nature of the wireless medium, wireless applications are highly susceptible to adversaries such as jammers and eavesdroppers that can manipulate the training and testing processes of machine learning over the air.

Y. E. Sagduyu, T. Erpek and Y. Shi are with Intelligent Automation, Inc., Rockville, MD, USA; T. Erpek and Y. Shi are also with Virginia Tech, Arlington, VA, USA and Blacksburg, VA, USA, respectively. Email: {ysagduyu, terpek, yshi}@i-a-i.com. This effort is supported by the U.S. Army Research Office under contract W911NF-17-C-0090. The content of the information does not necessarily reflect the position or the policy of the U.S. Government, and no official endorsement should be inferred.
While there is a growing interest in designing attacks on machine learning-driven data and control planes of wireless communications [6]–[9], adversarial machine learning has not been considered yet for sophisticated communication systems such as 5G.

5G systems are designed to operate in frequency bands from 450 MHz to 6 GHz, and 24.250 GHz to 52.600 GHz (millimeter-wave bands), including the unlicensed spectrum. While some of these bands are dedicated to commercial use of 5G, some other ones are opened for spectrum co-existence of 5G with other legacy wireless systems. In particular, the U.S. Federal Communications Commission (FCC) has adopted rules for the Citizens Broadband Radio Service (CBRS) band to allow the use of commercial communications systems in the 3550-3700 MHz band in an opportunistic manner by treating CBRS users as incumbent. These communications systems, including 5G systems, are required to vacate the band once the ranging (radar) signal of an incumbent CBRS user is detected by the Environmental Sensing Capability (ESC) system [10]. Radar signal detection and classification is a complex problem considering the unpredictable utilization patterns, channel effects, interference from other commercial systems operating in the same band, and interference leakage from radar systems operating in different bands. Deep neural networks can capture complex spectrum effects in these bands and deliver superior performance compared to conventional signal detection and classification techniques [11], [12]. On the other hand, the use of deep neural networks may expose the system to adversarial attacks, as adversaries can tamper with both data and control plane communications with smart jamming and consequently prevent efficient spectrum sharing of 5G with incumbent users.

Another aspect of 5G that is vulnerable to adversarial machine learning is network slicing.
As an emerging concept that enables 5G to serve diverse applications (such as IoT and autonomous driving) with different performance requirements (such as throughput and latency) on heterogeneous platforms, network slicing multiplexes virtualized and independent logical networks on a common physical network infrastructure [13]. Our focus is on network slicing in the 5G radio access network (RAN) [14]. In this setting, both the network slice manager and the user equipment (UE) cannot be trusted in general [15] and smart adversaries may impersonate their roles. As part of physical layer security, deep learning can be used by 5G for RF fingerprinting to classify and authenticate signals received at the network slice manager or host machines. However, by exploring the vulnerabilities of physical layer signal authentication, adversarial machine learning provides novel techniques to spoof wireless signals that cannot be reliably distinguished from those of intended users even when deep neural networks are utilized for wireless signal classification.

Compared to other data domains, there are several unique challenges when we apply adversarial attacks in wireless communications.

1) In data domains such as computer vision and NLP, it may be possible that the input to the machine learning algorithm (e.g., a target classifier) is directly manipulated by an adversary, e.g., by querying an online application programming interface (API). However, an adversary in the wireless domain cannot directly collect the same input data (e.g., spectrum sensing data) as the communication system's transmitter, due to the different channel and interference conditions perceived by the target model and the adversary.
As a result, the features (input) to the machine learning algorithm are different for the same instance.

2) The adversary in a wireless domain cannot directly obtain the output (label) of the target machine learning algorithm, since the output is used only by the target system such as 5G and thus is not available to any other wireless node outside the network. As a result, the adversary needs to observe the spectrum to make sense of the outputs of the target machine learning algorithm but cannot necessarily collect exactly the same outputs (e.g., classification labels).

3) The adversary in a wireless domain cannot directly manipulate the input data to a machine learning algorithm. Instead, it can only add its own transmissions on top of existing transmissions (if any) over the air (i.e., through channel effects) to change the input data (such as spectrum sensing data) indirectly.

4) Input features of the machine learning algorithm to be used by the adversary may differ from those used by the target communication system depending on the differences in the underlying waveform and receiver hardware characteristics at the adversary and the communication system.

By accounting for these challenges, we will describe how to apply adversarial machine learning to the 5G communication setting. In particular, we will discuss the vulnerabilities of machine learning applications in 5G with two motivating examples and show how adversarial machine learning provides a new attack surface in 5G communication systems:

1) Attack on spectrum sharing of 5G with incumbent users such as in CBRS: The ESC system senses the spectrum and uses a deep learning model to detect the radar signals as an incumbent user. If the incumbent user is not detected in a channel of interest, the 5G transmitter, in our case the 5G base station, namely the 5G gNodeB, is informed and starts with communications to the 5G UE.
Otherwise, the 5G gNodeB cannot use this channel and the Spectrum Access System (SAS) reconfigures the 5G system's spectrum access (such as vacating this particular channel) to avoid interference with the incumbent signals. By monitoring spectrum dynamics of channel access, the adversary builds a surrogate model of the deep neural network architecture based on its sensing results, predicts when a successful 5G transmission will occur, and jams the communication signal accordingly in these predicted time instances. We consider jamming both data transmission and spectrum sensing periods. The latter case is stealthier (i.e., more difficult to detect) and more energy efficient, as it only involves jamming of the shorter period of spectrum sensing before data transmission starts and forces the 5G gNodeB into making wrong transmit decisions. We show that this attack reduces the 5G communication throughput significantly.

2)
Attack on network slicing: The adversary transmits spoofing signals that mimic the signal characteristics of the 5G UE when requesting a network slice from the host machine. This attack potentially allows the adversary to starve resources in network slices after infiltrating through the authentication system built at the 5G gNodeB. A generative adversarial network (GAN) [16] is deployed at an adversary pair of a transmitter and a receiver that implement the generator and discriminator, respectively, to generate synthetic wireless signals that match the characteristics of a legitimate UE's 5G signals that the 5G gNodeB would expect to receive. We show that this attack allows an adversary to infiltrate the physical layer authentication of 5G with a high success rate.

These novel attacks leave a smaller footprint and are more energy-efficient compared to conventional attacks such as jamming of data transmissions. As a countermeasure, we present a proactive defense approach to reduce the performance of the inference attack that is launched as the initial step to build an adversarial (surrogate) model that captures wireless communication characteristics. Other attacks are built upon this model. The 5G system as the defender makes the adversarial model less accurate by deliberately making a small number of wrong decisions (such as in spectrum sensing, data transmission, or signal authentication). The defense carefully selects which decisions to flip with the goal of maximizing the uncertainty of the adversary while balancing the impact of these controlled decision errors on its own performance.

The rest of the paper is organized as follows. We introduce adversarial machine learning and describe the corresponding attacks in Section II. Then, we extend the use of adversarial machine learning to wireless communications and discuss the domain-specific challenges in Section III.
After identifying key vulnerabilities of machine learning-empowered 5G solutions, Section IV introduces two attacks built upon adversarial machine learning against 5G communications and presents a defense mechanism. Section V concludes the paper.

II. ADVERSARIAL MACHINE LEARNING
While there is a growing interest in applying machine learning to different data domains and deploying machine learning algorithms in real systems, it has become imperative to understand vulnerabilities of machine learning in the presence of adversaries. To that end, adversarial machine learning [3]–[5] has emerged as a critical field to enable safe adoption of machine learning subject to adversarial effects. One example that has attracted recent attention involves machine learning applications offered to the public or paid subscribers via APIs; e.g., Google Cloud Vision [17] provides cloud-based machine learning tools to build machine learning models. This online service paradigm creates security concerns of adversarial inputs to different machine learning algorithms ranging from computer vision to NLP [18], [19]. As another application domain, automatic speech recognition and voice controllable systems were studied in terms of the vulnerabilities of their underlying machine learning algorithms [20], [21]. As an effort to identify vulnerabilities in autonomous driving, attacks on self-driving vehicles were demonstrated in [22], where the adversary manipulated traffic signs to confuse the learning model.

The manipulation in adversarial machine learning may happen during the training or inference (test) time, or both. During the training time, the goal of the adversary is to provide wrong inputs (features and/or labels) to the training data such that the machine learning algorithm is not properly trained. During the test time, the goal of the adversary is to provide wrong inputs (features) to the machine learning algorithm such that it returns wrong outputs. As illustrated in Fig. 1, attacks built upon adversarial machine learning can be categorized as follows.

1)
Attack during the test time.

a) Inference (exploratory) attack: The adversary aims to infer the machine learning architecture of the target system to build a shadow or surrogate model that has the same functionality as the original machine learning architecture [5], [23]–[27]. This corresponds to a white-box or black-box attack depending on whether the machine learning model such as the deep neural network structure is available to the adversary, or not. For a black-box attack, the adversary queries the target classifier with a number of samples and records the labels. Then, it uses this labeled data as its own training data
to train a functionally-equivalent (i.e., statistically similar) deep learning classifier, namely a surrogate model. Once the machine learning functionality is learned, the adversary can use the inference results obtained from the surrogate model for subsequent attacks such as confidence reduction or targeted misclassification.

Fig. 1. Taxonomy of attacks built upon adversarial machine learning.

b)
Membership inference attack: The adversary aims to determine if a given data sample is a member of the training data, i.e., if a given data sample has been used to train the machine learning algorithm of interest [28]–[31]. The membership inference attack is based on the analysis of overfitting to check whether a machine learning algorithm is trained for a particular data type, e.g., a particular type of images. By knowing which type of data the machine learning algorithm is trained to classify, the adversary can then design a subsequent attack more successfully.

c)
Evasion attack: The adversary manipulates test data of a machine learning algorithm by adding carefully crafted adversarial perturbations [32]–[35]. Then, the machine learning model runs on erroneous test data and makes errors in test time, e.g., a classifier is fooled into accepting an adversary as legitimate. The samples with output labels that are closer to the decision region can be used by the adversary to increase the probability of error at the target machine learning algorithm.

d)
Spoofing attack: The adversary generates synthetic data samples from scratch rather than adding perturbations to the real ones. The GAN can be used for data spoofing by generating a synthetic dataset that is statistically similar to the original dataset [36]–[38]. The GAN consists of two deep neural networks, one acting as a generator and the other one acting as a discriminator. The generator generates spoofing signals and the discriminator aims to detect whether the received signal is spoofed, or not. Then, the generator and the discriminator play a minimax game to optimize their individual performance iteratively in response to each other's actions. After they converge, the generator's deep neural network is used to generate spoofing signals.

2)
Attack during the training time.

Poisoning (causative) attack: The adversary manipulates the training process by either directly providing wrong training data or injecting perturbations to the training data such that the machine learning model is trained with erroneous features and thus makes errors later in test time [32], [39], [40]. This attack is stealthier than the evasion attack (as the training period is typically shorter than the test period). To select which training data samples to tamper with, the adversary first runs samples through the inferred surrogate model and then changes their labels and sends these mislabeled samples as training data to the target classifier, provided that their deep learning scores are far away from the decision region of the surrogate model.

3)
Attack during both training and test times.

Trojan (backdoor or trapdoor) attack: The adversary slightly manipulates the training data by inserting Trojans, i.e., triggers, to only a few training data samples by modifying some data characteristics (e.g., putting stickers on traffic signs) and changing the labels of these samples to a target label (e.g., from the stop sign to the speed limit sign). This poisoned training data may be used to train the machine learning model. In test time, the adversary feeds the target classifier with input samples embedded with the same characteristics that were added as triggers by the adversary during training. The goal of the adversary is to cause errors when machine learning is run on data samples poisoned with triggers. In the meantime, clean (unpoisoned) samples without triggers should be processed correctly. Since only a few samples of Trojans are inserted, this attack is harder to detect than both evasion and causative attacks. The disadvantage of the Trojan attack is that it needs to be launched during both training and test times, i.e., the adversary needs to have access to and manipulate both training and test data samples.

Various defense mechanisms have been developed in the literature against adversarial machine learning attacks in computer vision, NLP, and other data domains. The core of the defense is to make the machine learning algorithm robust to the anticipated attacks. One approach against evasion attacks is randomized smoothing during training, where a number of small Gaussian noise samples are added to each training data sample to augment the training dataset. Then, the classifier that is trained with this augmented training dataset becomes robust against adversarial inputs in test time [41]. The defense can be further certified by adding perturbations to training data and generating a certificate to bound the expected error due to perturbations added later in test time [41].
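The randomized-smoothing augmentation just described can be sketched in a few lines; the toy dataset, noise level, and number of noisy copies below are illustrative assumptions, not the certified-defense setup of [41].

```python
import numpy as np

rng = np.random.default_rng(3)

def smooth_augment(X, y, n_copies=5, sigma=0.1):
    """Randomized smoothing at training time: append n_copies
    Gaussian-noise variants of every sample, keeping the original label."""
    noisy = np.concatenate([X + sigma * rng.standard_normal(X.shape)
                            for _ in range(n_copies)])
    return np.concatenate([X, noisy]), np.concatenate([y] * (n_copies + 1))

# Toy dataset standing in for the training samples of a target classifier.
X = rng.standard_normal((100, 4))
y = (X.sum(axis=1) > 0).astype(int)
X_aug, y_aug = smooth_augment(X, y)
print(X_aug.shape, y_aug.shape)   # (600, 4) (600,)
```

A classifier trained on (X_aug, y_aug) sees each sample under small Gaussian perturbations, which is what makes small adversarial perturbations less effective at test time.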
Note that this defense assumes that the attack provides erroneous input by adding perturbations to test data. We consider a proactive defense mechanism in Section III-C against adversarial machine learning in wireless communications. The goal of this defense is to provide a small number of carefully crafted wrong inputs to the adversary, as it builds (trains) its attack scheme, and thus prevent the adversary from building a high-fidelity surrogate model.

III. ADVERSARIAL MACHINE LEARNING IN WIRELESS COMMUNICATIONS
Machine learning finds rich applications in wireless communications, including spectrum access [42], signal classification [43], beamforming [44], beam selection [45], channel estimation [46], channel decoding [47], physical layer authentication [48], and transmitter-receiver scheduling [49]. In the meantime, there is a growing interest in bridging machine learning and wireless security in the context of adversarial machine learning [6], [7]. In Section III-A, we discuss how the different attacks in adversarial machine learning presented in Section II can be adapted to wireless communications. Then, we identify the domain-specific challenges of applying adversarial machine learning to wireless communications in Section III-B. Finally, we discuss the state-of-the-art defense techniques against adversarial attacks in wireless communications in Section III-C.
A. Wireless Attacks built upon Adversarial Machine Learning
Fig. 2 illustrates target tasks, attack types, and attack points of adversarial machine learning when applied to wireless communications. As the motivating scenario to describe different attacks, we consider a canonical wireless communication system with one transmitter 𝑇, one receiver 𝑅, and one adversary 𝐴. This setting is instrumental in studying conventional jamming [50]–[52] and defense strategies in wireless access, and can be easily extended to a network scenario with multiple transmitters and receivers [6]. The communication system shares the spectrum with a background traffic source 𝐵, whose transmission behavior is not known by 𝑇 and 𝐴.

Fig. 2. Key points of adversarial machine learning in wireless communications.
Target tasks: spectrum sharing, situational awareness, anti-jamming, network slicing, resource allocation, access control.
Attack points: spectrum sensing, modulation classification, adaptive modulation/coding, channel estimation, beam selection, PHY authentication.

Attack type                   | Test time | Training time | Wireless mechanism
Exploratory (inference) attack | +        | −             | Eavesdrop
Membership inference attack    | +        | −             | Jam, eavesdrop
Evasion attack                 | +        | −             | Jam
Poisoning (causative) attack   | −        | +             | Jam
Spoofing attack                | +        | −             | Waveform design
Trojan attack                  | +        | +             | Waveform design

In wireless domains, the following attacks have been considered against a machine learning-based classifier.

• Exploratory (inference) attack. Transmitter 𝑇 uses a machine learning-based classifier 𝐶𝑇 for a specific task such as spectrum sensing and adversary 𝐴 aims to build a classifier 𝐶𝐴 (namely, the surrogate model) that is functionally the same as (or similar to) the target classifier 𝐶𝑇 [2], [53]. The observations and labels at 𝐴 will differ compared to the ones at 𝐶𝑇 since 𝑇 and 𝐴 are not co-located and they experience different channels.

• Membership inference attack. Adversary 𝐴 aims to determine whether a given data sample is in the training data of 𝐶𝑇. One example of this attack in the wireless domain is to identify whether a target classifier is trained against signals from a particular transmitter, or not [54].

• Evasion attack. Adversary 𝐴 aims to determine or craft input samples that the target classifier 𝐶𝑇 cannot reliably classify. Wireless examples of evasion attacks include manipulation of inputs to spectrum sensing [7], [55], [56], modulation classification [57]–[67], channel access [68], [69], autoencoder-based physical layer design [70], and eavesdroppers for private communications [71].

• Poisoning (causative) attack. Adversary 𝐴 provides falsified samples to train or retrain the target classifier 𝐶𝑇 such that 𝐶𝑇 is not properly trained and makes significantly more errors than usual in test time. Poisoning attacks were considered against spectrum sensing at an individual wireless receiver in [7], [56] and against cooperative spectrum sensing at multiple wireless receivers in [72], [73]. Note that these poisoning attacks against sensing decisions are extensions of the conventional spectrum sensing data falsification (SSDF) attacks [74] to the adversarial machine learning domain, but they are more effective as they manipulate the training process more systematically for the case of machine learning-based spectrum sensors.

• Spoofing attack. Adversary 𝐴 generates spoofing signals that impersonate transmissions originated from 𝑇. The GAN can be used to generate synthetic forms of wireless signals that can be used not only to augment training data (e.g., for a spectrum sensing classifier [38]) but also to fool a target classifier [36], [37]. To spoof wireless signals in a distributed setting, two adversaries (one transmitter and one receiver) assume the roles of a generator and a discriminator of the GAN to spoof and discriminate signals, respectively, with two separate deep neural networks.

• Trojan attack. Adversary 𝐴 provides falsified samples to train the target classifier 𝐶𝑇 such that 𝐶𝑇 works well in general but provides incorrect results if a certain trigger is activated. For example, small phase shifts can be added as triggers to wireless signals (without changing the signal amplitude) to launch a stealthy Trojan attack against a wireless signal classifier [75].

B. Domain-specific Challenges for Adversarial Machine Learning in Wireless Communications
Wireless applications of adversarial machine learning are different from other data domains such as computer vision in four main aspects.

1) The adversary and the defender may not share the same features (such as received signals), as the channel and interference effects observed by them are different.

2) The adversary and the defender may not share the same labels (i.e., machine learning outputs). For example, the defender may aim to classify the channel as busy or not during spectrum sensing, whereas the adversary may need to decide on whether the defender will have a successful transmission or not. These two objectives may differ due to different channel and interference effects observed by the adversary and the defender.

3) The adversary may not directly manipulate the input data to the machine learning algorithm, as wireless users are typically separated in location and receive their input from wireless signals transmitted over the air. Therefore, it is essential to account for channel effects when designing wireless attacks and quantifying their impact.

4) The type of data observed by the adversary depends on the underlying waveform and receiver hardware of the adversary. While adversarial machine learning may run on raw data samples such as pixels in computer vision applications, the adversary in a wireless domain may have to use different types of available radio-specific features such as I/Q data and received signal strength indicator (RSSI) that may differ from the features used by the target machine learning system.

With the consideration of the above challenges, we can apply adversarial machine learning to design attacks on wireless communications. In an example of exploratory attack, 𝐴 collects spectrum sensing data and obtains features for its own classifier 𝐶𝐴.
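As a small illustration of point 4) above, a radio-specific feature such as RSSI can be derived directly from captured I/Q samples. The sketch below is a generic example with an assumed calibration reference and an assumed QPSK-like test burst; it is not the feature pipeline of any particular system.

```python
import numpy as np

rng = np.random.default_rng(4)

def rssi_db(iq, ref_db=0.0):
    """Average power of complex I/Q samples in dB, relative to a
    (hypothetical) calibration reference ref_db."""
    return ref_db + 10.0 * np.log10(np.mean(np.abs(iq) ** 2))

# Example capture: a unit-power QPSK-like burst plus light complex noise.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1024) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
level = rssi_db(symbols + noise)
print(f"RSSI: {level:.2f} dB")
```

With unit-power symbols and noise power around 0.02, the measured level lands slightly above 0 dB relative to the reference.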
Unlike the traditional exploratory attack, where 𝐶𝐴 provides the same set of prediction results as the target classifier 𝐶𝑇, the label that 𝐴 aims to predict is whether there is a successful transmission from 𝑇 or not, which can be collected by sensing the acknowledgement message (ACK). For that purpose, 𝐴 collects both input (features) and output (label) for 𝐶𝐴. If, for an instance, 𝐴 can successfully predict whether there will be an ACK, 𝐶𝐴 predicts the correct result. Deep learning is successful in building the necessary classifier 𝐶𝐴 for 𝐴 in exploratory attacks [6].

We can further design an example of evasion attack as follows. If 𝐴 predicts a successful transmission, it can either transmit in the sensing phase (to change the features input to 𝐶𝑇) [7], [55] or in the data transmission phase (to jam data) [6]. For the first case, most idle channels are detected as busy by 𝐶𝑇 and thus the throughput is reduced almost to zero [7]. For the second case, many transmissions that would be successful (if there were no jamming) are jammed and thus the throughput is reduced significantly [56].

In a causative attack, 𝐶𝑇 is updated by additional training data collected over time and 𝐴 attempts to manipulate this retraining process. For example, if 𝑇 transmits but does not receive an ACK, the prediction by 𝐶𝑇 is incorrect and additional training data is collected. 𝐶𝑇 is expected to improve with additional training data. However, 𝐴 can again either transmit in the sensing phase or in the data transmission phase to manipulate the training process. For the first case, features are changed [7]. For the second case, many transmissions that would be otherwise successful are jammed and consequently their labels are changed [56]. For both cases, the throughput (using the updated 𝐶𝑇) is reduced significantly.

The membership inference attack is based on the analysis of overfitting.
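As a toy illustration of this overfitting signal (a hypothetical setup, not the scheme of [54]): a model fit to a small training set assigns systematically higher confidence to its own training members than to fresh samples, which a membership test can exploit.

```python
import numpy as np

rng = np.random.default_rng(7)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Small training set with arbitrary labels: easy to overfit.
X_train = rng.standard_normal((20, 10))
y_train = rng.integers(0, 2, 20)

# Fit an unregularized logistic regression by gradient descent.
w, b = np.zeros(10), 0.0
for _ in range(2000):
    p = sigmoid(X_train @ w + b)
    w -= 0.1 * X_train.T @ (p - y_train) / 20
    b -= 0.1 * (p - y_train).mean()

def confidence(X, y):
    """Mean predicted probability assigned to the claimed true label."""
    p = sigmoid(X @ w + b)
    return np.where(y == 1, p, 1 - p).mean()

members = confidence(X_train, y_train)
outsiders = confidence(rng.standard_normal((200, 10)), rng.integers(0, 2, 200))
print(f"member confidence {members:.2f} vs outsider confidence {outsiders:.2f}")
```

The gap between the two averages is exactly the kind of statistic a membership inference test thresholds on.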
Note that features can carry either useful information (used to predict the channel status) or biased information (due to the different distributions of the training data and general test data). 𝐶𝑇 is optimized to fit on both useful and biased information (𝐹𝑢 and 𝐹𝑏). Fitting on 𝐹𝑏 corresponds to overfitting, which provides correct classification on the given training data but wrong classification on general test data. In a white-box attack, 𝐴 studies parameters in 𝐶𝑇 based on a local linear approximation for each layer and the combination of all layers. This approach builds a classifier for membership inference that can leak private information of a transmitter such as waveform, channel, and hardware characteristics by observing spectrum decisions based on the output of a wireless signal classifier [54].

In the Trojan attack, 𝐴 slightly manipulates training data by inserting triggers to only a few training data samples by modifying their phases and changing the labels of these samples to a target label. This poisoned training data is used to train 𝐶𝑇. In test time, 𝐴 transmits signals with the same phase shift that was added as a trigger during training time. 𝑅 accurately classifies clean (unpoisoned) signals without triggers, but misclassifies signals poisoned with triggers [75].

C. Defense Schemes against Adversarial Machine Learning
The basis of many attacks such as the evasion and poisoning attacks discussed in Section III-A is the exploratory attack that trains a functionally equivalent classifier 𝐶𝐴 as a surrogate model for the target classifier 𝐶𝑇. Once 𝐶𝐴 is built, the adversary can analyze this model to understand the behavior of 𝐶𝑇, which paves the way for identifying the weaknesses of 𝐶𝑇 and then designing subsequent attacks. Therefore, a defense mechanism is needed to mitigate the exploratory attack. One proactive defense mechanism is to add controlled randomness to the target classifier 𝐶𝑇 such that it is not easy to launch an exploratory attack. For that purpose, transmitter 𝑇 can transmit when the channel is detected as busy or can remain idle when the channel is detected as idle. However, such incorrect decisions will decrease the system performance even without attacks. Thus, the key problem is how to maximize the effect of the defense while minimizing the impact on system performance. In particular, our approach is to exploit the likelihood score, namely the confidence level, returned by the machine learning classifier such that 𝑇 performs defense operations only when the confidence is high, thereby maximizing the utility of each defense operation [53]. This way, with a few defense operations, the error probability of the exploratory attack can be significantly increased [7] and subsequent attacks such as the
Background traffic source B Receiver R Fig. 3. Scenario 1: Spectrum sharing of 5G with incumbent user (radar) inthe CBRS band. evasion attack in the sensing phase can be further mitigated.The number of defense operations can be adapted in a dynamicway by monitoring the performance over time.Possible approaches against the membership inference at-tack include the following two. The first approach aims tomake the distribution in training data similar to the generaltest data. When we apply 𝐶 𝑇 on some samples, if a sampleis classified with high confidence, it is likely that this samplecontributes to overfitting in the training data and it is removedto make the training data similar to general test data. Thesecond approach aims to remove the biased information in 𝐹 𝑏 . We can analyze the input to any layer in the deep neuralnetwork and identify features that play an important role in 𝐹 𝑏 . Then, we can rebuild 𝐶 𝑇 on features other than identifiedones to remove the impact of overfitting.Trojan attacks can be detected by identifying potentialtriggers inserted into training data. Since all malicious sam-ples have a particular trigger, we can apply outlier detectionmethods such as the one based on Median Absolute Deviation(MAD) or clustering to detect this trigger in the Trojan attack.Once the trigger is detected, any sample with this trigger isdiscarded or its label is switched to mitigate the attack [75].IV. A DVERSARIAL M ACHINE L EARNING IN
5G COMMUNICATIONS
We present two scenarios to demonstrate adversarial machine learning-based attacks on 5G systems. The details of the first and second scenarios and the performance results are presented in Sections IV-A and IV-B, respectively.
A. Scenario 1: Adversarial Attack on 5G Spectrum Sharing

1) Attack Setting:
The operation of 5G communications systems is expected to cover the CBRS band, where 5G users need to share the spectrum with the radar signal. The radar is the incumbent (primary) user of the band, and the 5G communications system is the secondary user. The 5G transmitter (𝑇), namely the 5G gNodeB, and receiver (𝑅), namely the 5G UE, need to communicate when no radar signal (the background traffic source 𝐵) is detected in the band. The ESC system senses the spectrum, decides whether the channel is idle or busy with a machine learning-based classifier 𝐶𝑇, and reports its decisions to the SAS. The SAS informs 𝑇 and, if the channel is idle, 𝑇 transmits data. 𝑅 sends an ACK once it receives data from 𝑇. This procedure is shown in Fig. 3. An adversary 𝐴 also senses the spectrum and decides when to perform certain attack actions, as shown in Fig. 4.

We consider the following attack actions built upon adversarial machine learning. First, the adversary 𝐴 trains the surrogate model 𝐶𝐴 in the form of an exploratory attack, as shown in Fig. 4. Then, 𝐴 uses the surrogate model to decide when and how to interfere with the incumbent user's spectrum access process, as shown in Fig. 5. 𝐴 aims to either jam data transmissions such that 𝑅 cannot receive the data transmission from 𝑇, or jam the spectrum sensing period such that an idle channel is considered busy. The first part is a conventional jamming attack, and the second part fools 𝑇 into wasting transmit opportunities. The second part corresponds to an evasion attack, as it manipulates the sensing inputs to the machine learning algorithm used by 𝑇 for spectrum access decisions.
Fig. 4. Scenario 1 - Attack Step 1: Adversary trains an adversarial deep learning classifier as the surrogate model to infer the ongoing transmit pattern of the incumbent.
Fig. 5. Scenario 1 - Attack Step 2: Adversary jams the incumbent transmissions by using its surrogate model.
Since the adversary 𝐴 only needs to attack when there would be a successful transmission (if there were no attack), it aims to decide whether there is an ACK returned by the receiver 𝑅, and uses the presence or absence of ACKs as labels to train its machine learning-based classifier 𝐶𝐴. The process of building 𝐶𝐴 is not exactly the same as the conventional exploratory attack (such as the one considered in computer vision applications), where 𝐶𝐴 should be the same as (or as similar as possible to) 𝐶𝑇. Due to different spectrum sensing results at different locations, the inputs to 𝐶𝐴 and 𝐶𝑇 are different. Further, the output of 𝐶𝐴 (i.e., ‘ACK’ or ‘no ACK’) is different from the output of 𝐶𝑇 (i.e., ‘idle’ or ‘busy’).
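The surrogate-training step described above, with sensed power (RSSI) vectors as features and overheard ACK/no-ACK outcomes as labels, can be sketched in a few lines. This is a minimal illustration on synthetic data: the sample counts, feature dimension, and the logistic model (standing in for the paper's feedforward neural network) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each sample is a vector of sensed RSSI values,
# labeled by whether the adversary overheard an ACK (1) or not (0).
n, d = 1000, 10
ack = rng.integers(0, 2, n)                               # hypothetical ACK labels
rssi = rng.normal(0.0, 1.0, (n, d)) + 2.0 * ack[:, None]  # ACK slots look "busier"

# Logistic-regression surrogate C_A (a stand-in for the deep network),
# trained by gradient descent on the cross-entropy loss.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(rssi @ w + b, -30, 30)))  # predicted P(ACK)
    w -= 0.5 * (rssi.T @ (p - ack)) / n
    b -= 0.5 * float(np.mean(p - ack))

pred = (rssi @ w + b) > 0.0      # jam only in slots where C_A predicts an ACK
acc = float(np.mean(pred == ack))
print(f"surrogate training accuracy: {acc:.2f}")
```

Once such a surrogate agrees with the observed ACK pattern, the adversary applies it to each upcoming slot and jams only when an ACK, i.e., a successful transmission, is predicted.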
2) Simulation Setup and Performance Results:
For this attack, we set up a scenario where the distance from 𝑇 to 𝐵 is m and the distance from 𝐴 to 𝐵 is m. 𝑇 builds 𝐶𝑇 based on the sensed signal powers, namely the RSSIs. Each data sample consists of sensing results, and 𝑇 collects of those samples. Then, one half of these samples is used for training and the other half is used for testing. We train and test classifiers in TensorFlow and consider the following classifier characteristics.
Fig. 6. Generating the 5G NR signal at the UE.

• A feedforward neural network is trained for each classifier with the backpropagation algorithm, using the cross-entropy loss function.
• The rectified linear unit (ReLU) activation function is used at the hidden layers.
• The softmax activation function is used at the output layer.
• The batch size is .
• The number of training steps is .

The deep neural network structure of classifier 𝐶𝑇 is given as follows.
• The input layer has neurons.
• The first hidden layer is a dense layer of neurons.
• The second hidden layer is a dropout layer with a dropout ratio of . .
• The third hidden layer is a dense layer of neurons.
• The fourth hidden layer is a dropout layer with a dropout ratio of . .
• The output layer has two neurons.

The monostatic radar signal is simulated in MATLAB as the background signal. A free-space model is used to calculate the propagation loss between 𝐵 and 𝑇. The classifier 𝐶𝑇 performs very well in the absence of adversaries: it correctly detects all idle channel instances and most busy channel instances. The error on busy channel detection is . %. That is, the 5G system can successfully protect . % of radar transmissions while achieving % throughput (normalized by the best throughput that would be achieved by an ideal algorithm that detects every idle channel correctly).

The 5G communications system in this scenario uses the 5G New Radio (NR) signal, generated using the MATLAB 5G Toolbox. The steps used to generate the 5G NR signal are shown in Fig. 6. The signal includes the transport (uplink shared channel, UL-SCH) and physical channels. The transport block is segmented after the cyclic redundancy check (CRC) addition, and low-density parity-check (LDPC) coding is used as forward error correction. The output codewords are 16-QAM modulated. Both data and control plane information is loaded onto the time-frequency grid of the 5G signal.
Orthogonal frequency-division multiplexing (OFDM) modulation is used with inverse Fast Fourier Transform (IFFT) and Cyclic Prefix (CP) addition operations. The transmit frequency is set to GHz. The subcarrier spacing is kHz. The number of resource blocks used in the simulations is . The transmitted waveform is passed through a tapped delay line (TDL) propagation channel model. The delay spread is set to x − seconds. Additive white Gaussian noise (AWGN) is added to the signal in order to simulate the signal received at the receiver.

The adversary 𝐴 collects signal samples. Each sample includes sensed signal powers as features and ‘ACK’ or ‘no ACK’ as a label. We split the data samples in half for training and testing of classifier 𝐶𝐴. The hyperparameters of 𝐶𝐴 are the same as those of 𝐶𝑇, except that the input layer has neurons. The classifier 𝐶𝐴 can correctly detect all ACK instances and most no-ACK instances. The error probability when there is no ACK is . %. Once 𝐶𝐴 is built, the adversary 𝐴 can successfully jam all data transmissions or all pilot signals from 𝑇. Moreover, unnecessary jamming is minimized, i.e., among all jamming transmissions, only . % are performed when there is no ACK.

As mentioned earlier, 𝐴 can jam either 𝑇's data transmission or 𝑇's pilot signal. In general, 𝑇's data transmission period is much longer than 𝑇's pilot signal. Thus, jamming the pilot signal is more energy efficient. For example, if the length of the data transmission is nine times the length of the pilot signal and the adversary has the energy to jam data transmissions for % of time slots, the adversary cannot jam all data transmissions and can only reduce the throughput by . %. With the same amount of energy consumption, the adversary can jam all pilot signals and thus reduce the throughput by %.

B. Scenario 2: Adversarial Attack on Signal Authentication in Network Slicing

1) Attack Setting:
The second scenario considers a spoofing attack on the network slicing application. A classifier is trained at the 5G gNodeB to detect 5G UEs from their signals and then provide certain services after authenticating them, as shown in Fig. 7. The 5G gNodeB trains the classifier 𝐶𝑆 based on the I/Q data, which includes both signal power and phase, to distinguish signals from a target 5G UE and random noise signals. For that purpose, we use the same deep neural network structure as in Scenario 1, except that classifier 𝐶𝑆 has an input layer of neurons.
Fig. 7. Scenario 2: 5G signal authentication.
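To make the authentication decision at the 5G gNodeB concrete, the sketch below separates a structured waveform from random noise. The QPSK constellation, burst length, SNR, and the simple distance-to-constellation score are all illustrative assumptions; the paper's classifier 𝐶𝑆 is a deep neural network operating on raw I/Q samples rather than a hand-crafted statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def make_ue_burst(n=128, snr_db=10):
    # Hypothetical target-UE waveform: QPSK symbols plus AWGN.
    sym = QPSK[rng.integers(0, 4, n)]
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)  # per I/Q component
    return sym + noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))

def make_noise_burst(n=128):
    # "Other" signal: random noise with the same average power.
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

def constellation_score(x):
    # Mean squared distance to the nearest QPSK point: small for the
    # UE's structured signal, larger for unstructured noise.
    d = np.abs(x[:, None] - QPSK[None, :]) ** 2
    return float(d.min(axis=1).mean())

ue_scores = [constellation_score(make_ue_burst()) for _ in range(50)]
noise_scores = [constellation_score(make_noise_burst()) for _ in range(50)]
threshold = (max(ue_scores) + min(noise_scores)) / 2   # stand-in decision rule
authenticated = sum(s < threshold for s in ue_scores)
rejected = sum(s >= threshold for s in noise_scores)
print(authenticated, "UE bursts authenticated,", rejected, "noise bursts rejected")
```

A deep classifier learns a far richer boundary than this single statistic, which is precisely why a GAN that matches waveform, channel, and hardware effects (rather than a replay) is needed to defeat it.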
An adversary 𝐴 aims to transmit similar signals to gain access to 5G-enabled services. For this purpose, it can sense the spectrum to (i) collect signal samples (I/Q data) and (ii) identify whether such a signal is classified as a target user's signal, by either monitoring feedback from the 5G gNodeB (regarding which 5G UE is authenticated) or observing which 5G UE starts communicating with the 5G gNodeB as an authenticated user. Once sufficient 5G signal samples are collected, adversary 𝐴 can apply the GAN to generate synthetic 5G data as spoofing signals and then transmit them to gain access to 5G-enabled services. In this attack (shown in Fig. 8), 𝐴 consists of a pair of a transmitter and a receiver. The adversary transmitter 𝐴𝑇 trains a deep neural network to build the generator of the GAN, and the adversary receiver 𝐴𝑅 trains another deep neural network to build the discriminator of the GAN. The only feedback from 𝐴𝑅 to 𝐴𝑇 during training is whether the signal was transmitted by the 5G UE or by 𝐴𝑇. For that purpose, 𝐴𝑇 sends a flag along with its signal to 𝐴𝑅 to indicate its transmissions, and this flag is used to label samples. This attack process is illustrated in Fig. 8. Note that this spoofing attack serves the same purpose as an evasion attack, namely fooling the 5G gNodeB into making wrong classification decisions. The only difference is that, instead of adding perturbations on top of real transmissions (by jamming the channel as in Scenario 1), the adversary 𝐴 generates new synthetic signals and transmits them directly to the 5G gNodeB.
Fig. 8. Scenario 2 - Attack: Adversary trains a GAN over the air to generate spoofing signals and transmits them to infiltrate the 5G signal authentication.
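The adversarial training loop between the generator at 𝐴𝑇 and the discriminator at 𝐴𝑅 can be illustrated with a deliberately tiny one-dimensional GAN. Reducing the spoofed signal to a single real-valued statistic and using linear models with hand-derived gradients are assumptions made purely to keep the sketch self-contained; the paper trains deep networks on I/Q samples over the air.

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

# Toy setup: the genuine "UE signal" is reduced to scalar samples from N(3, 1).
# Generator g(z) = a*z + b (adversary transmitter A_T) must mimic it;
# discriminator D(x) = sigmoid(w*x + c) (adversary receiver A_R) must tell
# genuine samples from generated ones.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.05

for _ in range(3000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    real = 3.0 + rng.normal()
    fake = a * rng.normal() + b
    dr, df = sig(w * real + c), sig(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)
    # Generator step: ascend log D(fake), i.e., try to fool the discriminator.
    z = rng.normal()
    df = sig(w * (a * z + b) + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

fake_mean = b  # E[a*z + b] = b, since E[z] = 0
print(f"mean of generated samples ~ {fake_mean:.2f} (genuine mean is 3.0)")
```

At convergence the discriminator can no longer separate the two sources, which is exactly the condition the spoofing attack needs: the synthetic samples match the statistic the authenticator relies on.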
2) Simulation Setup and Performance Results:
The deep neural network structure for the generator at 𝐴𝑇 is given as follows.
• The input layer has neurons.
• There are three hidden layers, each a dense layer of neurons.
• The output layer has neurons.

The deep neural network structure for the discriminator at 𝐴𝑅 is the same as that of the generator, except that its output layer has two neurons.

For this attack, we set up a scenario where we vary the noise power with respect to the minimum received 5G signal power at the 5G gNodeB; the corresponding signal-to-noise ratio (SNR) is denoted by 𝛾 (measured in dB). The 5G gNodeB builds its classifier 𝐶𝑆 by using samples, half for training and half for testing. We vary 𝛾 as − dB, dB, and dB. No matter which 𝛾 value is used, 𝐶𝑆 can always be built perfectly, i.e., there is no error in distinguishing 5G signals from other (randomly generated) signals. Adversary 𝐴 collects samples (5G signals and other signals), applies the GAN to generate synthetic data samples, and transmits them to gain access to services, as shown in Fig. 8. The success probability is shown in Table I. When 𝛾 = dB, the success probability reaches . Note that this approach matches all the waveform, channel, and radio hardware effects of the 5G UE's transmissions as expected to be received at the 5G gNodeB. Therefore, this attack performance cannot be achieved by replay attacks that amplify and forward the received signals.

C. Defense against Adversarial Machine Learning in 5G Communications
We use attack Scenario 2 (the adversarial attack on 5G signal authentication) as an example to discuss the defense.
TABLE I
SPOOFING ATTACK PERFORMANCE.

5G signal SNR, 𝛾 (in dB)    Attack success probability
−                            . %
                             . %
                             . %
Fig. 9. Scenario 2 - Defense: Controlled errors introduced at the 5G gNodeB as part of the defense.
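The controlled-error rule at the 5G gNodeB can be sketched as a post-processing step on the classifier's likelihood scores: deny access for the fraction 𝑃𝑑 of requests the classifier is most confident belong to the intended user, so the labels the adversary observes are wrong exactly where they would be most informative for its surrogate model. The score distribution and the nominal threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def defended_decisions(scores, p_d):
    """Flip the 'authenticate' decision for the fraction p_d of requests
    with the highest intended-user likelihood scores (deliberate denials)."""
    scores = np.asarray(scores)
    decisions = scores > 0.5                    # nominal authentication rule
    n_flip = int(round(p_d * len(scores)))
    flip = np.argsort(scores)[::-1][:n_flip]    # most confident instances
    defended = decisions.copy()
    defended[flip] = False                      # deliberately deny access
    return decisions, defended

scores = rng.uniform(size=200)                  # stand-in likelihood scores
nominal, defended = defended_decisions(scores, p_d=0.05)
denied = int(nominal.sum() - defended.sum())
print(denied, "of", int(nominal.sum()), "granted requests deliberately denied")
```

Because the denials target only high-confidence instances, each one perturbs the adversary's view of the decision boundary as much as possible per unit of lost service.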
One proactive defense approach at the 5G gNodeB is to introduce selective errors by denying access to a very small number of requests from intended 5G UEs (i.e., the false alarm probability is slightly increased). Note that no errors are made in authenticating non-intended users (i.e., the misdetection error is not increased). This is not a deterministic approach, so an intended 5G UE that is denied a request in one instance can be authenticated in its next access attempt. The controlled errors made by the 5G gNodeB are inputs to the adversary, and thus it cannot train a proper GAN to generate synthetic 5G signals, i.e., spoofed signals can be reliably detected and denied access. Fig. 9 shows the defense scenario. Given that such defense actions (selective errors) also decrease the system performance (as very few intended 5G UEs may be denied access over time), the probability of defense actions should be minimized. Thus, the 5G gNodeB selects signal samples that are assigned a high score as the intended user by its classifier and denies access to their corresponding transmitter. This approach misleads the adversary the most, as the uncertainty in determining the decision boundary of its classifier is maximized for a given number of defense actions.

Let 𝑃𝑑 denote the ratio of defense actions to all authentication instances. Table II shows that even a small 𝑃𝑑, e.g., 𝑃𝑑 = . , can significantly decrease the attack success probability, where 𝛾 is fixed as − dB. On the other hand, there is no need to use a large 𝑃𝑑, e.g., larger than . , since the attack success probability quickly converges to roughly %. A similar defense can be applied against Scenario 1, where the 5G gNodeB deliberately makes a small number of wrong transmit decisions when accessing the spectrum. Then, the adversary cannot train a proper surrogate model to launch successful attacks on data transmission or spectrum sensing.

V. CONCLUSION
The security aspects of machine learning have gained more prominence with the increasing use of machine learning algorithms in various critical applications, including wireless communications. We first explained different attack types in
adversarial machine learning and the corresponding defense methods. Then, we focused on how adversarial machine learning can be used in wireless communications to launch stealthy attacks. We described the challenges associated with designing attacks in wireless domains, accounting for the differences from other data domains and the unique challenges involved. Next, we focused on the vulnerabilities of 5G communication systems due to adversarial machine learning.

We considered two 5G scenarios. In the first scenario, the adversary learns 5G's deep learning-driven pattern of spectrum sharing with an incumbent user, such as in the CBRS band, and jams the data and control signals to disrupt 5G communications. Results show that the adversary can significantly reduce the throughput of the 5G communications while leaving only a small footprint. In the second scenario, a spoofing attack is performed by the adversary to pass through the deep learning-based physical-layer authentication system at the 5G gNodeB. A GAN is trained to generate the spoofing signal by matching the waveform, channel, and radio hardware effects at the receiver. Results show that the attack is successful for a range of SNRs of the 5G signal used during training. Then, a defense technique is proposed such that controlled errors are deliberately made by the 5G system to fool the adversary into training inaccurate models while minimizing the negative effects on its own performance. The novel attacks presented in this paper highlight the impact of adversarial machine learning on wireless communications in the context of 5G and raise the urgent need for defense mechanisms.

TABLE II
DEFENSE AGAINST SPOOFING ATTACK.

Ratio of defense actions, 𝑃𝑑    Attack success probability
                                 . %
.01                              68. %
.02                              61. %
.05                              59. %
.                                . %
.                                . %

REFERENCES

[1] C. Jiang, H. Zhang, Y. Ren, Z. Han, K. C. Chen, and L. Hanzo, “Machine learning paradigms for next-generation wireless networks,”
IEEE Wireless Communications, vol. 24, no. 2, 2016.
[2] T. Erpek, T. O'Shea, Y. Sagduyu, Y. Shi, and T. C. Clancy, “Deep learning for wireless communications,” Development and Analysis of Deep Learning Architectures, Springer, 2019.
[3] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint, arXiv:1611.01236, 2016.
[4] Y. Vorobeychik and M. Kantarcioglu, Adversarial Machine Learning, Morgan & Claypool, 2018.
[5] Y. Shi, Y. E. Sagduyu, K. Davaslioglu, and R. Levy, “Vulnerability detection and analysis in adversarial deep learning,” Guide to Vulnerability Analysis for Computer Networks and Systems - An Artificial Intelligence Approach, Springer, Cham, 2018.
[6] T. Erpek, Y. E. Sagduyu, and Y. Shi, “Deep learning for launching and mitigating wireless jamming attacks,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 1, 2019.
[7] Y. Sagduyu, Y. Shi, and T. Erpek, “Adversarial deep learning for over-the-air spectrum poisoning attacks,” IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2019.2950398.
[8] Y. E. Sagduyu, Y. Shi, T. Erpek, W. Headley, B. Flowers, G. Stantchev, and Z. Lu, “When wireless security meets machine learning: Motivation, challenges, and research directions,” arXiv preprint, arXiv:2001.08883, 2020.
[9] D. Adesina, C.-C. Hsieh, Y. E. Sagduyu, and L. Qian, “Adversarial machine learning in wireless communications using RF data: A review,” arXiv preprint, arXiv:2012.14392, 2020.
[10] “Citizens Broadband Radio Service,” Code of Federal Regulations, Title 47, Part 96, June 2015.
[11] M. R. Souryal and T. Nguyen, “Effect of federal incumbent activity on CBRS commercial service,” IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2019.
[12] W. M. Lees, A. Wunderlich, P. J. Jeavons, P. D. Hale, and M. R. Souryal, “Deep learning classification of 3.5-GHz band spectrograms with applications to spectrum sensing,” IEEE Transactions on Cognitive Communications and Networking (TCCN), vol. 5, no. 2, 2019.
[13] S. Zhang, “An overview of network slicing for 5G,” IEEE Wireless Communications, vol. 26, no. 3, 2019.
[14] Y. Shi, T. Erpek, and Y. E. Sagduyu, “Reinforcement learning for dynamic resource optimization in 5G radio access network slicing,” IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2020.
[15] NGMN Alliance, “5G security recommendations Package 2: Network Slicing,” 2016.
[16] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial networks,” Advances in Neural Information Processing Systems, 2014.
[17] “Google Cloud Vision API,” available at https://cloud.google.com/vision.
[18] Y. Shi, Y. E. Sagduyu, K. Davaslioglu, and J. H. Li, “Generative adversarial networks for black-box API attacks with limited training data,” IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), 2018.
[19] Y. Shi, Y. E. Sagduyu, K. Davaslioglu, and J. Li, “Active deep learning attacks under strict rate limitations for online API calls,” IEEE Symposium on Technologies for Homeland Security (HST), 2018.
[20] T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, N. Carlini, P. Mishra, and W. Zhou, “Hidden voice commands,” USENIX Security Symposium, 2016.
[21] G. Zhang, C. Yan, X. Ji, T. Zhang, T. Zhang, and W. Xu, “DolphinAttack: Inaudible voice commands,” arXiv preprint, arXiv:1708.09537, 2017.
[22] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint, arXiv:1611.01236, 2016.
[23] M. Barreno, B. Nelson, R. Sears, A. Joseph, and J. Tygar, “Can machine learning be secure?” ACM Symposium on Information, Computer and Communications Security, 2006.
[24] F. Tramer, F. Zhang, A. Juels, M. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” USENIX Security Symposium, 2016.
[25] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” ACM Conference on Computer and Communications Security (CCS), 2017.
[26] Y. Shi, Y. E. Sagduyu, and A. Grushin, “How to steal a machine learning classifier with deep learning,” IEEE Symposium on Technologies for Homeland Security (HST), 2017.
[27] X. Wu, M. Fredrikson, S. Jha, and J. F. Naughton, “A methodology for formalizing model-inversion attacks,” Computer Security Foundations, 2016.
[28] M. Nasr, R. Shokri, and A. Houmansadr, “Machine learning with membership privacy using adversarial regularization,” ACM Conference on Computer and Communications Security (CCS), 2018.
[29] K. Leino and M. Fredrikson, “Stolen memories: Leveraging model memorization for calibrated white-box membership inference,” USENIX Security Symposium, 2020.
[30] L. Song, R. Shokri, and P. Mittal, “Privacy risks of securing machine learning models against adversarial examples,” ACM Conference on Computer and Communications Security (CCS), 2018.
[31] J. Jia, A. Salem, M. Backes, Y. Zhang, and N. Z. Gong, “MemGuard: Defending against black-box membership inference attacks via adversarial examples,” ACM Conference on Computer and Communications Security (CCS), 2019.
[32] Y. Shi and Y. E. Sagduyu, “Evasion and causative attacks with adversarial deep learning,” IEEE Military Communications Conference (MILCOM), 2017.
[33] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” IEEE European Symposium on Security and Privacy, 2016.
[34] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint, arXiv:1607.02533, 2016.
[35] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[36] Y. Shi, K. Davaslioglu, and Y. E. Sagduyu, “Generative adversarial network for wireless signal spoofing,” ACM Workshop on Wireless Security and Machine Learning (WiseML), 2019.
[37] Y. Shi, K. Davaslioglu, and Y. E. Sagduyu, “Generative adversarial network in the air: Deep adversarial learning for wireless signal spoofing,” IEEE Transactions on Cognitive Communications and Networking, doi: 10.1109/TCCN.2020.3010330.
[38] K. Davaslioglu and Y. E. Sagduyu, “Generative adversarial learning for spectrum sensing,” IEEE International Conference on Communications (ICC), 2018.
[39] L. Pi, Z. Lu, Y. Sagduyu, and S. Chen, “Defending active learning against adversarial inputs in automated document classification,” IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2016.
[40] S. Alfeld, X. Zhu, and P. Barford, “Data poisoning attacks against autoregressive models,” AAAI Conference on Artificial Intelligence, 2016.
[41] J. Cohen, E. Rosenfeld, and Z. Kolter, “Certified adversarial robustness via randomized smoothing,” International Conference on Machine Learning (ICML), 2019.
[42] Y. Shi, K. Davaslioglu, Y. E. Sagduyu, W. C. Headley, M. Fowler, and G. Green, “Deep learning for signal classification in unknown and dynamic spectrum environments,” IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2019.
[43] S. Soltani, Y. E. Sagduyu, R. Hasan, K. Davaslioglu, H. Deng, and T. Erpek, “Real-time and embedded deep learning on FPGA for RF signal classification,” IEEE Military Communications Conference (MILCOM), 2019.
[44] F. B. Mismar, B. L. Evans, and A. Alkhateeb, “Deep reinforcement learning for 5G networks: Joint beamforming, power control, and interference coordination,” IEEE Transactions on Communications, vol. 68, no. 3, 2019.
[45] T. S. Cousik, V. K. Shah, J. H. Reed, T. Erpek, and Y. E. Sagduyu, “Fast initial access with deep learning for beam prediction in 5G mmWave networks,” arXiv preprint, arXiv:2006.12653, 2020.
[46] P. Dong, H. Zhang, G. Y. Li, I. S. Gaspar, and N. NaderiAlizadeh, “Deep CNN-based channel estimation for mmWave massive MIMO systems,” IEEE Journal of Selected Topics in Signal Processing, vol. 23, no. 11, 2019.
[47] T. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, “On deep learning-based channel decoding,” Conference on Information Sciences and Systems (CISS), 2017.
[48] K. Davaslioglu, S. Soltani, T. Erpek, and Y. E. Sagduyu, “DeepWiFi: Cognitive WiFi with deep learning,” IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2019.2949815.
[49] N. Abu Zainab, T. Erpek, K. Davaslioglu, Y. E. Sagduyu, Y. Shi, S. Mackey, M. Patel, F. Panettieri, M. Qureshi, V. Isler, and A. Yener, “QoS and jamming-aware wireless networking using deep reinforcement learning,” IEEE Military Communications Conference (MILCOM), 2019.
[50] Y. E. Sagduyu and A. Ephremides, “A game-theoretic analysis of denial of service attacks in wireless random access,” Wireless Networks, vol. 15, no. 5, 2009.
[51] Y. E. Sagduyu, R. Berry, and A. Ephremides, “Jamming games in wireless networks with incomplete information,” IEEE Communications Magazine, vol. 49, no. 8, 2011.
[52] Y. E. Sagduyu, R. Berry, and A. Ephremides, “Wireless jamming attacks under dynamic traffic uncertainty,” IEEE International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2010.
[53] Y. Shi, Y. E. Sagduyu, T. Erpek, K. Davaslioglu, Z. Lu, and J. Li, “Adversarial deep learning for cognitive radio security: Jamming attack and defense strategies,” IEEE ICC Workshop on Promises and Challenges of Machine Learning in Communication Networks (ML4COM), 2018.
[54] Y. Shi, K. Davaslioglu, and Y. E. Sagduyu, “Over-the-air membership inference attacks as privacy threats for deep learning-based wireless signal classifiers,” ACM Workshop on Wireless Security and Machine Learning (WiseML), 2020.
[55] Y. Shi, T. Erpek, Y. E. Sagduyu, and J. Li, “Spectrum data poisoning with adversarial deep learning,” IEEE Military Communications Conference (MILCOM), 2018.
[56] Y. E. Sagduyu, Y. Shi, and T. Erpek, “IoT network security from the perspective of adversarial deep learning,” IEEE SECON Workshop on Machine Learning for Communication and Networking in IoT (MLCN-IoT), 2019.
[57] M. Sadeghi and E. G. Larsson, “Adversarial attacks on deep-learning based radio signal classification,” IEEE Wireless Communications Letters, vol. 8, no. 1, 2019.
[58] B. Flowers, R. M. Buehrer, and W. C. Headley, “Evaluating adversarial evasion attacks in the context of wireless communications,” arXiv preprint, arXiv:1903.01563, 2019.
[59] M. Z. Hameed, A. Gyorgy, and D. Gunduz, “Communication without interception: Defense against deep-learning-based modulation detection,” IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2019.
[60] M. Z. Hameed, A. Gyorgy, and D. Gunduz, “The best defense is a good offense: Adversarial attacks to avoid modulation detection,” IEEE Transactions on Information Forensics and Security, vol. 16, 2021.
[61] B. Flowers, R. M. Buehrer, and W. C. Headley, “Communications aware adversarial residual networks,” IEEE Military Communications Conference (MILCOM), 2019.
[62] S. Kokalj-Filipovic and R. Miller, “Adversarial examples in RF deep learning: Detection of the attack and its physical robustness,” arXiv preprint, arXiv:1902.06044, 2019.
[63] S. Kokalj-Filipovic, R. Miller, N. Chang, and C. L. Lau, “Mitigation of adversarial examples in RF deep classifiers utilizing autoencoder pre-training,” arXiv preprint, arXiv:1902.08034, 2019.
[64] B. Kim, Y. E. Sagduyu, K. Davaslioglu, T. Erpek, and S. Ulukus, “Over-the-air adversarial attacks on deep learning based modulation classifier over wireless channels,” Conference on Information Sciences and Systems (CISS), 2020.
[65] B. Kim, Y. E. Sagduyu, K. Davaslioglu, T. Erpek, and S. Ulukus, “Channel-aware adversarial attacks against deep learning-based wireless signal classifiers,” arXiv preprint, arXiv:2005.05321, 2020.
[66] B. Kim, Y. E. Sagduyu, T. Erpek, K. Davaslioglu, and S. Ulukus, “Adversarial attacks with multiple antennas against deep learning-based modulation classifiers,” IEEE GLOBECOM Open Workshop on Machine Learning in Communications, 2020.
[67] B. Kim, Y. E. Sagduyu, T. Erpek, K. Davaslioglu, and S. Ulukus, “Channel effects on surrogate models of adversarial attacks against wireless signal classifiers,” arXiv preprint, arXiv:2012.02160, 2020.
[68] F. Wang, C. Zhong, M. C. Gursoy, and S. Velipasalar, “Defense strategies against adversarial jamming attacks via deep reinforcement learning,” Conference on Information Sciences and Systems (CISS), 2020.
[69] C. Zhong, F. Wang, M. C. Gursoy, and S. Velipasalar, “Adversarial jamming attacks on deep reinforcement learning based dynamic multichannel access,” IEEE Wireless Communications and Networking Conference (WCNC), 2020.
[70] M. Sadeghi and E. G. Larsson, “Physical adversarial attacks against end-to-end autoencoder communication systems,” IEEE Communications Letters, vol. 23, no. 5, 2019.
[71] B. Kim, Y. E. Sagduyu, K. Davaslioglu, T. Erpek, and S. Ulukus, “How to make 5G communications ‘invisible’: Adversarial machine learning for wireless privacy,” Asilomar Conference on Signals, Systems, and Computers, 2020.
[72] Z. Luo, S. Zhao, Z. Lu, J. Xu, and Y. E. Sagduyu, “When attackers meet AI: Learning-empowered attacks in cooperative spectrum sensing,” arXiv preprint, arXiv:1905.014, 2019.
[73] Z. Luo, S. Zhao, Z. Lu, Y. E. Sagduyu, and J. Xu, “Adversarial machine learning based partial-model attack in IoT,” ACM Workshop on Wireless Security and Machine Learning (WiseML), 2020.
[74] Y. E. Sagduyu, “Securing cognitive radio networks with dynamic trust against spectrum sensing data falsification,” IEEE Military Communications Conference (MILCOM), 2014.
[75] K. Davaslioglu and Y. E. Sagduyu, “Trojan attacks on wireless signal classification with adversarial machine learning,”