Fingerprinting Software-defined Networks
Heng Cui, Ghassan O. Karame, Member, IEEE, Felix Klaedtke, and Roberto Bifulco
Abstract—Software-defined networking (SDN) eases network management by centralizing the control plane and separating it from the data plane. The separation of planes in SDN, however, introduces new vulnerabilities in SDN networks, since the difference in processing packets at each plane allows an adversary to fingerprint the network's packet-forwarding logic. In this paper, we study the feasibility of fingerprinting the controller-switch interactions by a remote adversary, whose aim is to acquire knowledge about specific flow rules that are installed at the switches. This knowledge empowers the adversary with a better understanding of the network's packet-forwarding logic and exposes the network to a number of threats. In our study, we collect measurements from hosts located across the globe using a realistic SDN network comprising OpenFlow hardware and software switches. We show that, by leveraging information from the RTT and packet-pair dispersion of the exchanged packets, fingerprinting attacks on SDN networks succeed with overwhelming probability. We also show that these attacks are not restricted to active adversaries, but can be equally mounted by passive adversaries that only monitor traffic exchanged with the SDN network. Finally, we discuss the implications of these attacks on the security of SDN networks, and we present and evaluate an efficient countermeasure to strengthen SDN networks against fingerprinting. Our results demonstrate the effectiveness of our countermeasure in deterring fingerprinting attacks on SDN networks.
Index Terms—Software-defined networking, OpenFlow, Fingerprinting, Security.
I. INTRODUCTION

SOFTWARE-DEFINED NETWORKING (SDN) [14], [27] eases the development and deployment of network applications by defining a standard interface between the control plane and the data plane. In SDN, the control plane is implemented by a logically centralized controller, which interacts over a bi-directional communication channel with the data plane's network devices. The controller can query devices for their state, e.g., to acquire traffic statistics or information about the status of the switches' ports, and modify their forwarding behavior by installing and deleting flow rules. Network devices can also notify the controller about network events (e.g., the reception of certain packets) and device state changes. For example, a number of advanced reactive control plane logic implementations [11], [17], [35], [46] configure network devices to send notifications to the controller according to some installed policy (e.g., when a received packet does not match any of the installed flow rules). Such a notification triggers the controller to perform a series of operations, such as installing the appropriate forwarding rules at the switches, reserving network resources on a given network path, etc.

The separation of the control and data plane in SDN opens the doors for a remote adversary to fingerprint the network.
H. Cui, G. O. Karame, F. Klaedtke, and R. Bifulco are with NEC Laboratories Europe, Heidelberg, Germany. E-mail: [email protected].
In particular, whenever packet forwarding is performed in hardware, packets at the data plane are processed several orders of magnitude faster than at the software-based control plane. This discrepancy acts as a distinguisher for a remote adversary to learn whether a given probe packet is handled just at the data plane or triggers an interaction between the data plane and the control plane. An interaction provides evidence that the probe packet does not have any matching flow rule stored at the switch's flow table (or that it requires special attention from the controller). This knowledge empowers an adversary with a better understanding of the network's packet-forwarding logic and, as we outline in this work, exposes the network to a number of threats. In spite of the plethora of SDN security solutions in the literature [5], [15], [33], [39], [40], [41], no contribution analyzes the feasibility and realization of fingerprinting attacks on practical SDN deployments. Moreover, there are no proposed solutions to alleviate fingerprinting attacks on SDN.

In this paper, we address this problem and study the fingerprinting of controller-switch interactions by a remote adversary with respect to various network parameters, such as the number of hops in the communication path and the data link bandwidth. For that purpose, we collect measurements from 20 different hosts located across the globe (Australia, Asia, Europe, and North America) using an SDN network comprising several OpenFlow hardware and software switches. Our results show that, by leveraging information from the packet-pair dispersion of the exchanged packets, fingerprinting attacks on SDN networks succeed with overwhelming probability. For instance, an adversary can correctly identify, with an accuracy of 98.54%, whether a probe packet triggers the installation of forwarding rules at three hardware switches in our SDN network.
Our results suggest that this fingerprinting accuracy is only marginally affected by the number of hardware switches that need to be configured on the path, and by the data link bandwidth. More surprisingly, our results also show that the presence of software switches, which process packets at the software layer, does not hinder the ability of a remote adversary in fingerprinting controller-switch interactions.

We also show that fingerprinting attacks can be mounted by passive adversaries that, e.g., capture a snapshot of the traffic exchanged with the SDN network. Although existing traffic might not contain packet-pair traces, our findings show that a passive adversary can leverage the RTT of packets (that are exchanged within a short time interval) to fingerprint the SDN network with an accuracy of up to 98.73%. The fingerprinting accuracy due to the RTT of packets is, however, largely affected by the SDN network size, and significantly deteriorates with time.

This work extends our prior work in [2] (see Section VII for a detailed explanation of our extensions). To the best of our knowledge, this is the first complete work which quantifies, by means of well-defined metrics, the success of fingerprinting attacks using state-of-the-art OpenFlow hardware switches. We discuss the implications of our findings on the security of SDN networks, and we show that fingerprinting attacks can expose the SDN network to a number of new threats that are not encountered in traditional networks. For instance, an adversary that knows which packets cause an interaction with the controller can, e.g., acquire evidence about the occurrence of a particular communication event, or abuse this knowledge to launch Denial-of-Service (DoS) attacks by overloading the switches with bogus flow-table updates [22]. In light of our findings, we present and evaluate an efficient countermeasure to strengthen SDN networks against fingerprinting.
Our evaluation shows that our countermeasure considerably reduces the ability of an adversary to mount fingerprinting attacks on SDN networks.

The remainder of this paper is organized as follows. In Section II, we define our problem statement. In Section III, we describe our setup and the performed experiments, and summarize the collected data. In Section IV, we present and detail our results. In Section V, we analyze the implications of our findings on the security of SDN networks. In Section VI, we present and evaluate an efficient countermeasure to deter fingerprinting in SDN networks. In Section VII, we discuss related work, and we conclude the paper in Section VIII.

II. PROBLEM STATEMENT
Before describing the focus of our work, we give a brief refresher on OpenFlow, a widely deployed realization of SDN.
A. Background
SDN separates the control and data planes by defining a switch's programming interface and a protocol to access this interface, i.e., the OpenFlow protocol [31]. The controller leverages the OpenFlow protocol to access the switch's programming interface and configure the forwarding behavior of the switch's data plane. The communication between the controller and switches is established using an out-of-band control channel.

The core entities exposed by the OpenFlow switch's programming interface are flow tables and flow rules. A flow table of a switch is just a container for its flow rules, which define the switch's forwarding behavior. The controller can add, delete, or modify flow rules of a switch's flow table by sending an OFPT_FLOW_MOD OpenFlow message to the switch. The parameters of an OFPT_FLOW_MOD message specify how the flow table of the switch should be modified. A flow rule, for instance, provides a semantic like "if a network packet's IP destination address is 1.2.3.4, then forward the packet to port 2." In general, a flow rule contains a match set that defines the network packets to which the rule applies. It further contains an action set that defines the actions that should be applied to such packets, for example, forward to port 2. Whenever a packet is received by a switch, the packet's header is used as a search key to retrieve the rule that applies to the packet, by performing a lookup in the flow table. The lookup operation compares the packet's header with the rules' match sets to find the rule that matches the packet. Rules are prioritized in case multiple rules match. For the cases in which the controller needs to inspect a network packet before making a forwarding decision and installing the corresponding forwarding rules, OpenFlow defines a special "forward to controller" action. When this action is applied to a packet, the switch generates an OFPT_PACKET_IN message that is sent to the controller. This message contains the original packet and some additional information, such as the switch and the port ID on which the packet was received.

The OFPT_PACKET_IN feature is used in basic network control logic implementations, such as that of an Ethernet learning switch. It is also used in more complex dynamic control plane implementations. In both cases, the network operates as follows: a packet received by the switch generates an OFPT_PACKET_IN message; the controller receives and analyzes the message to make a forwarding decision; the decision is finally implemented by sending OFPT_FLOW_MOD messages, which install rules at the relevant switches. This ensures that all similar packets, i.e., those that belong to the same network flow, are forwarded directly by the switches with no further interactions with the controller. Note that the controller can use barrier messages to ensure that message dependencies are met, e.g., when multiple switches are reconfigured by the controller for handling a new network flow. When a switch receives a barrier request (OFPT_BARRIER_REQUEST), the switch must finish processing all previously received messages before processing new messages. After processing all messages, it notifies the controller by sending a barrier reply message (OFPT_BARRIER_REPLY).

B. Problem Statement
The main objective of our work is to study the ability of a remote adversary to identify whether an interaction between the controller and the switches (and a subsequent rule installation) has been triggered by a given packet. The absence of a controller-switch interaction typically provides evidence that the flow rules that handle the received packet are already installed at the switches. Otherwise, if a communication between the controller and the switches is triggered, then this suggests that the received packet requires further examination by the controller, e.g., since it does not have any matching entry stored at the switch's flow table, or because the controller requires additional information before installing a forwarding decision at the switches.

In our study, we consider both active and passive adversaries. We assume that an active adversary can compromise a remote client, inject probe packets of her choice, and capture the timing of the corresponding responses issued by a server. In contrast, a passive adversary cannot inject packets in the network but only monitors the traffic exchanged between the server and the client. Notice that passive adversaries are hard to detect by standard intrusion detection systems since they do not generate any extra network traffic.

Our study focuses on answering the following questions:
• Is it possible to remotely identify whether the installation of flow rules has been triggered by a given packet?
• What is the accuracy of fingerprinting attacks in SDN networks?
• What is the impact of the number of switches that need to be configured on the fingerprinting accuracy?
• What is the impact of the data link bandwidth on the fingerprinting accuracy?
• Is the fingerprinting accuracy affected by the presence of software switches in the SDN network?
• How and to which extent can such fingerprinting attacks be efficiently mitigated?

III. EXPERIMENTAL SETUP
In this section, we detail our experimental setup. This includes a description of our testbed, the used features, the conducted experiments, and the collected datasets.
A. Testbed
Our measurement setup is summarized in Figure 1. The testbed comprises three OpenFlow hardware switches (three NEC PF5240 switches [29]) and one OpenFlow software switch (an OpenVSwitch, version 2.3.1 [32]). The switches are connected to the data plane over a 100 Mbps data channel. We also consider the case where the data channel bandwidth increases to 1 Gbps. Note that, although our testbed only comprises three hardware switches, it can emulate the processing of packets in many realistic datacenters. Recall that a conventional datacenter network typically consists of three layers of switches: top-of-rack, aggregation, and core [1]. Packets are usually processed by at most one switch in each of these layers; that is, each packet traverses (at most) three hops in the datacenter's network. The testbed's switches interface with a Floodlight v0.9 controller [12], which runs on a computer with a 6-core Intel Xeon L5640 2.26 GHz CPU and 24 GB of RAM. A legacy Ethernet switch bridges the connections between the OpenFlow switches and the controller. To emulate realistic network load on the control channel, we limit the control interface of the switches to 100 Mbps.

The controller is configured to minimize the processing delay for an incoming packet-in event, i.e., we only require the controller to perform a table lookup and retrieve pre-computed forwarding rules in response to packet-in events. Furthermore, the controller always performs bi-directional flow installation; that is, the handling of a packet-in event triggers the installation of a pair of rules, one per flow direction, at each involved switch. We ensure that the controller's CPU is not overloaded during our measurements.

We deploy a cross-traffic generator on an AMD dual-core processor running at 2.5 GHz to emulate realistic WAN traffic load on the switches' ports that were used in our study.
The generated cross traffic follows a Pareto distribution with 20 ms mean and 4 ms variance [7].

To analyze the effect of the data link bandwidth on the fingerprinting accuracy, we bridge our SDN network to the Internet using 100 Mbps and 1 Gbps links (respectively), by means of a firewall running on an AMD Athlon dual-core processor 3800+ machine. For the purpose of our experiments, we collect measurement traces between a server with an Intel Xeon E3-1230 3.20 GHz CPU and 16 GB of RAM and 20 remote clients deployed across the globe. Table I details the specifications and locations of the clients used in our experiments. In our testbed, the server and the software switch were co-located on the same machine.

Note that, by reducing the time required for rule installation to a minimum, our testbed emulates a scenario that is particularly hard for fingerprinting. In Section IV-C, we discuss the implications of our setup on our findings.
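The Pareto cross-traffic model above can be reproduced with a short script. The shape/scale derivation below is our own (we assume the stated 20 ms mean and interpret the "4 ms variance" as 4 ms²; the paper does not give the Pareto parameters explicitly), and `pareto_cross_traffic` is a hypothetical helper, not the authors' generator:

```python
import random

def pareto_cross_traffic(mean_ms=20.0, var_ms2=4.0, n=1000, seed=1):
    """Generate Pareto-distributed inter-packet gaps (in ms) with the
    given mean and variance.

    For a Pareto distribution with shape a and scale xm:
      mean = a*xm/(a-1),   var = xm^2 * a / ((a-1)^2 * (a-2)),
    hence var/mean^2 = 1/(a*(a-2)); solving for a gives the shape below.
    """
    r = var_ms2 / mean_ms ** 2         # squared coefficient of variation
    a = 1 + (1 + 1 / r) ** 0.5         # shape, from a*(a-2) = 1/r
    xm = mean_ms * (a - 1) / a         # scale matching the target mean
    rng = random.Random(seed)
    # random.Random.paretovariate(a) samples Pareto with xm = 1; rescale.
    return [xm * rng.paretovariate(a) for _ in range(n)]

gaps = pareto_cross_traffic(n=200_000)
print(round(sum(gaps) / len(gaps), 1))   # empirical mean, close to 20 ms
```

A traffic generator would then sleep for each sampled gap between packet transmissions; here we only check that the sampled gaps reproduce the target mean.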
B. Features
We define the network path $P_{cs}$ between a client $c$ and a server $s$ as a sequence of consecutive links $P_{cs} = \langle L^1_{cs}, \ldots, L^n_{cs} \rangle$, which are connected via network components. Similarly, we denote the reverse path by $P_{sc} = \langle L^1_{sc}, \ldots, L^m_{sc} \rangle$. We denote by $\tau_{L_i}$ the transmission delay at hop $i$; that is, $\tau_{L_i} = S/B_{L_i}$, where $S$ is the size of the packet and $B_{L_i}$ is the capacity of $L_i$. Furthermore, let $d^j_{L_i}$ refer to the additional delay that is experienced by packet $j$ when traversing $L_i$. The delay $d^j_{L_i}$ generally results from additional queuing exhibited by packet $j$ due to cross-traffic at hop $i$.

To identify a communication event between the switches and the controller, we rely on two time-based features: packet-pair dispersion and RTT.
1) Packet-pair Dispersion:
The dispersion between two packets sent by the client after a link $L^i_{cs}$ refers to the time interval between the complete transmission of these packets on $L^i_{cs}$. When measuring the dispersion of a packet pair traversing an SDN network, two cases emerge.

Case 1—Packets do not trigger rule installation: Assuming that two probe packets (labeled in the sequel by "1" and "2") are sent by the client with an initial dispersion $\Delta_0$, and that they do not trigger any interaction on the control plane, then the resulting dispersion measured after a link $L^i_{cs}$ is given by [8], [18]:

$$\Delta_i = \begin{cases} \tau_{L^i_{cs}} + d^2_{L^i_{cs}} & \text{if } \tau_{L^i_{cs}} + d^2_{L^i_{cs}} \ge \Delta_{i-1},\\ \Delta_{i-1} + \big(d^2_{L^i_{cs}} - d^1_{L^i_{cs}}\big) & \text{otherwise.} \end{cases} \tag{1}$$

In our setup, the client sends large packet pairs back-to-back in time with an initial dispersion $\Delta_0 = S/B_{L^1_{cs}}$. These packets are then highly likely to queue at the bottleneck link (the link for which $B_{L^i_{cs}}$ is minimal). Let $min$ be the index of the bottleneck link on the Internet path $P_{cs}$. Following from Equation 1, $\Delta_n$ (measured by the server) is then given by:

$$\Delta_n = \frac{S}{B^{min}_{cs}} + d^2_{L^{min}_{cs}} + \sum_{i=min+1}^{n} \big(d^2_{L^i_{cs}} - d^1_{L^i_{cs}}\big),$$

where $d^2_{L^{min}_{cs}}$ refers to the additional queuing delay that is experienced by the second packet on the bottleneck link. Notice that in the absence of cross-traffic, $\Delta_n \approx S/B^{min}_{cs}$. If the server immediately issues small replies to the packets sent by the client (e.g., by issuing ACKs), then these packets are unlikely to queue on the reverse path $P_{sc}$, and the measured dispersion between the reply packets will approximately correspond to $\Delta_n$ [8]. As shown in [8], [18], the packet-pair dispersion is a feature that is relatively stable over time (since it depends on the bottleneck bandwidth of the path).

Fig. 1. Sketch of our measurement setup. Our testbed comprises three NEC PF5240 OpenFlow hardware switches and one OpenVSwitch (version 2.3.1).
Case 2—Packets trigger rule installation: $\Delta_n$ is typically in the order of tens of microseconds on current Internet paths [19]. However, we expect $\Delta_n$ to increase (e.g., to the order of a few milliseconds) if the probe packet pair triggered an interaction on the control plane. This is mainly due to the (relatively slow) handling of a notification by the controller. Namely, when the packet pair triggers an interaction on the control plane, then:

$$\Delta_n = \frac{S}{B^{min}_{cs}} + d^2_{L^{min}_{cs}} + \sum_{i=min+1}^{n} \big(d^2_{L^i_{cs}} - d^1_{L^i_{cs}}\big) + \max_{\forall k}\, \delta^i_k.$$

Here, $\delta^i_k$ refers to the delay introduced by a possible communication between the controller and OpenFlow switch $k$ on the path between the sender and receiver. Since we assume that the controller installs bi-directional rules on all switches at once, only the maximum installation delay is accounted for (and it is only witnessed when packets traverse $P_{cs}$).
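To make the two cases concrete, the following sketch simulates $\Delta_n$ for a back-to-back MTU-sized pair under both hypotheses. All numbers (bottleneck capacity, per-hop cross-traffic noise, controller-interaction delay range) are illustrative assumptions of ours, not measurements from the paper; only the 1.43 ms threshold is taken from the evaluation in Section IV.

```python
import random

def simulate_dispersion(trigger_rule_install, seed=0, n_hops=10):
    """Simulate the end-to-end dispersion (in ms) of a back-to-back
    MTU-sized packet pair, following the Case 1 / Case 2 model.

    Assumed parameters: 100 Mbps bottleneck, small Gaussian cross-traffic
    noise (d2 - d1) per hop, and a few-millisecond controller interaction
    delay (max_k delta_k) when a rule installation is triggered.
    """
    rng = random.Random(seed)
    S = 1500 * 8                    # MTU-sized packet, in bits
    B_min = 100e6                   # bottleneck capacity, bits/s
    delta_n = S / B_min * 1e3       # S/B at the bottleneck, in ms (0.12 ms)
    for _ in range(n_hops):         # per-hop queuing noise (d2 - d1)
        delta_n += rng.gauss(0, 0.01)
    if trigger_rule_install:        # controller round trip dominates
        delta_n += rng.uniform(2.0, 8.0)
    return delta_n

no_install = [simulate_dispersion(False, seed=s) for s in range(100)]
install = [simulate_dispersion(True, seed=s) for s in range(100)]
threshold_ms = 1.43                 # EER threshold reported in Section IV
print(all(d < threshold_ms for d in no_install),
      all(d > threshold_ms for d in install))
```

Under these assumptions, the two hypotheses separate cleanly at the millisecond-scale threshold, which is exactly the distinguisher the attack exploits.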
2) Round Trip Times (RTT):
The RTT witnessed by a packet $i$ sent from the client to the server is:

$$RTT_i = \sum_{j=1}^{n} \big(\tau_{L^j_{cs}} + d^i_{L^j_{cs}}\big) + \sum_{j=1}^{m} \big(\tau_{L^j_{sc}} + d^i_{L^j_{sc}}\big) + \max_{\forall k}\, \delta^i_k. \tag{2}$$

Clearly, if no communication between switch $k$ and the controller occurs (e.g., forwarding rules are already installed), then $\delta^i_k = 0$. Since there might be more than one OpenFlow switch on $P_{cs}$, $RTT_i$ depends on the maximum latency incurred by a switch-controller interaction across all the OpenFlow switches included in $P_{cs}$. (Recall that, before any packet is forwarded by the switch, it undergoes the following steps: (i) the packet (or just its header) is transmitted to the controller; (ii) the controller performs a table lookup in order to invoke the corresponding forwarding rule for the packet; (iii) the decision is transmitted to the involved switch(es) in the form of a flow table entry; (iv) the switch installs the entry, and, finally, the packet is forwarded by the switch.)

Since the RTT exhibited by packets largely depends on the geographical location of hosts, and on the underlying network conditions, we measure in our experiments the difference, $\delta_{RTT}$, between the RTTs of two probe packets issued by the same sender, i.e., $\delta_{RTT} = RTT_1 - RTT_2$. This feature does not depend on the location of hosts, but is mainly dominated by rule installation overhead and network jitter. Namely, following from Equation 2:

$$\delta_{RTT} = \max_{\forall k}\, \delta^1_k - \max_{\forall k}\, \delta^2_k + \sum_{j=1}^{n} \big(d^1_{L^j_{cs}} - d^2_{L^j_{cs}}\big) + \sum_{j=1}^{m} \big(d^1_{L^j_{sc}} - d^2_{L^j_{sc}}\big),$$

where the sums of the queuing-delay differences are typically negligible. If both packets do not result in any rule installation, then $\delta_{RTT} \approx 0$. Otherwise, if one of the packets triggers a rule installation, then $|\delta_{RTT}| \gg 0$, since $\max_{\forall k} \delta^1_k \gg 0$ or $\max_{\forall k} \delta^2_k \gg 0$.

C. Data Collection
To collect timing information based on our features, we deployed 20 remote clients across the globe (cf. Table I) that exchange UDP-based probe packet trains with the local server. Notice that we rely on UDP for transmitting packets since Internet gateways may filter TCP SYN or ICMP packets. Each probe train consists of:
• A CLEAR packet signaling the start of the measurements. Upon reception of this packet, the controller deletes all the entries stored within the flow tables of the OpenFlow switches in $P_{cs}$.
• One second after the transmission of the CLEAR packet, the client transmits four MTU-sized packet pairs. Here, different packet pairs are sent with an additional second of separation.
• One second after the transmission of the last packet pair, another CLEAR packet is sent to clear all flow tables.
• Two packets separated by one second finally close the probe train.

(Our experimental results show that one second is enough to account for rule installation on all four OpenFlow switches in our network.)

We point out that all of our probe packets belong to the same network flow, i.e., they are crafted with the same packet header. For each received packet of every train, the local server issues a short reply (e.g., a 64-byte ACK). We maintain a detailed log of the timing information relevant to the sending and reception of the exchanged probe packets. When measuring dispersion, we account for out-of-order packets; this explains negative dispersion values.

For each of our 20 clients, we exchange 450 probe trains on the paths $P_{cs}$ and $P_{sc}$ to the server. Half of these probe trains are exchanged before noon, while the remaining half is exchanged in the evening. In our measurements, we vary the number of OpenFlow switches that need to be configured in
reaction to the exchanged probe packets. Namely, we consider the following four cases in which a probe packet triggers the reconfiguration of some of the OpenFlow switches: (1) one hardware switch, (2) two hardware switches, (3) three hardware switches, and (4) the software switch. We remark that the choice of the configured hardware switches in our testbed (cf. Figure 1) has no impact on the measured features since we ensure that the remaining hardware switches already have matching rules installed. Furthermore, we remark that packets of a probe train only traverse the software switch in case (4), i.e., when it is configured. In total, our data collection phase lasted from April 27, 2015 until October 27, 2015, in which 869,201 probe packets were exchanged with our local server using all clients/configurations, amounting to almost 0.66 GB of data.

TABLE I
REMOTE CLIENTS USED IN OUR EXPERIMENTS. BANDWIDTHS ARE BASED ON ESTIMATES FROM THE CLOUD PROVIDERS.

Amazon:
  Europe:        1-core Intel Xeon 2.5 GHz CPU, 100-250 Mbps
                 1-core Intel Xeon 2.5 GHz CPU, 100-250 Mbps
                 1-core Intel Xeon 2.5 GHz CPU, 250 Mbps
                 2-core Intel Xeon 2.5 GHz CPU, 250 Mbps
                 4-core Intel Xeon 2.5 GHz CPU, 1 Gbps
  Australia:     1-core Intel Xeon 2.5 GHz CPU, 100-250 Mbps
                 2-core Intel Xeon 2.5 GHz CPU, 250 Mbps
  North America: 1-core Intel Xeon 2.5 GHz CPU, 100-250 Mbps
                 1-core Intel Xeon 2.5 GHz CPU, 250 Mbps
                 4-core Intel Xeon 2.5 GHz CPU, 1 Gbps
Azure:
  Europe:        1-core AMD Opteron 2.1 GHz CPU, 30-40 Mbps†
† These bandwidths were obtained using measurements.
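The probe-train layout described above can be encoded as a simple schedule builder. This is a sketch of the train structure only; the CLEAR packet size and the tuple-based representation are our assumptions, as the authors' measurement tool is not described at this level of detail:

```python
def build_probe_train(mtu=1500, clear_size=64):
    """Return the probe-train schedule as (time_offset_s, kind, size_bytes)
    tuples, mirroring the train structure used in the data collection:
    a CLEAR packet, four back-to-back MTU-sized packet pairs one second
    apart, a second CLEAR, and two closing packets one second apart.
    """
    train, t = [], 0.0
    train.append((t, "CLEAR", clear_size))    # flush all flow tables
    t += 1.0
    for _ in range(4):                        # four MTU-sized pairs
        train.append((t, "PROBE", mtu))       # back-to-back: same offset
        train.append((t, "PROBE", mtu))
        t += 1.0
    train.append((t, "CLEAR", clear_size))    # flush again
    t += 1.0
    train.append((t, "PROBE", mtu))           # two closing packets,
    t += 1.0
    train.append((t, "PROBE", mtu))           # one second apart
    return train

train = build_probe_train()
print(len(train), sum(1 for _, kind, _ in train if kind == "CLEAR"))
```

A sender would iterate over the schedule, sleeping until each offset and emitting a UDP datagram of the given size with an identical header for every packet, so that all probes belong to the same network flow.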
D. Evaluation Metric
We evaluate two hypotheses based on our features: (i) the first hypothesis states that no rule installation was triggered by our probe packets, and (ii) the second hypothesis corresponds to the conjecture that a rule was installed in reaction to our probes. Here, two errors are possible: false match and false non-match. In our case, the former is equivalent to a decision that no rule was installed, while in reality our probes triggered the installation of a rule. The latter is equivalent to a decision that a rule was installed, while in reality no rule was installed. The False Match Rate (FMR) and False Non-match Rate (FNR) represent the frequencies at which these errors occur. The Equal Error Rate (EER), which is used as a single metric for the accuracy of an identification system [13], is the rate at which FMR and FNR are equal. In the sequel, we use the EER to evaluate the effectiveness of our features.

We compute the EER as follows. We compute the Probability Distribution Function (PDF) of the measured values of our features (across all configurations and client locations) as described in Sections III-A and III-B. We then separate the PDFs into two categories: (i) PDF_N, which contains all measurements obtained when our probes did not trigger a rule installation, and (ii) PDF_Y, which contains those measurements obtained when the probe packets caused a rule installation at k OpenFlow switches (with k = 1, 2, 3 hardware switches or k = 1 software switch). We then compute the rate of falsely accepted and falsely rejected hypotheses given a threshold. The measurements from PDF_N that are above this threshold indicate the number of false rejects (FNR), and measurements from PDF_Y that are below the threshold indicate the number of false accepts (FMR). Recall that the EER is the error rate where FNR and FMR are equal. The value of the EER-based threshold is our reference for an accept/reject decision. If the value of a measurement is smaller than the threshold, then we conjecture that it belongs to PDF_N; otherwise, we conjecture that it belongs to PDF_Y.

Note that EER values are between 0 and 1. An EER value for a feature close to 0.5 indicates that our hypotheses cannot be distinguished from each other for the given feature. In particular, the value 0.5 means that PDF_N and PDF_Y for the given feature completely overlap, and, based on the feature, an adversary cannot distinguish at all whether a packet triggered a rule installation. Conversely, EER values close to 0 and 1 indicate that our hypotheses are distinguishable based on our features, i.e., the fingerprinting accuracy is high.

IV. EVALUATION RESULTS
In this section, we present and analyze our experimental results using each of our proposed time-based features.
A. Packet-pair Dispersion Feature
In Figure 2, we show (i) the PDF of dispersion values for which none of the packets of a pair triggered any rule installation at the switches (referred to as PDF_N), and (ii) the PDF of dispersion values for which the probes triggered a rule installation (referred to as PDF_Y).

Fig. 2. Fingerprinting SDN networks (100 Mbps link) using packet-pair dispersions; panels (a)-(d) show k = 3, 2, 1 hardware switches and k = 1 software switch. In our plots, we assume a bin size of 250 µs.

Fig. 3. Fingerprinting SDN networks (1 Gbps link) using packet-pair dispersions; panels (a)-(d) show k = 3, 2, 1 hardware switches and k = 1 software switch. In our plots, we assume a bin size of 250 µs.
PDF Y is con-siderably greater than that of PDF N ; PDF Y and PDF N aresignificantly different at 1% according to t-test [45] when thenumber k of OpenFlow hardware and software switches thatneed to be configured varies between 1 and 3. This is mainlydue to the fact that the delay required for rule installation, max ∀ k δ ik , acts as a strong distinguisher when measuring ∆ n .More specifically, our results show that across all locations theobtained EER are approximately 1% for k = 2 , hardwareswitches, 1.74% for k = 1 hardware switch, and 4.49% for k = 1 software switch. For example, when k = 3 hardwareswitches, the EER is calculated using a threshold of 1.43 ms;that is, 98.92% of all measured values in PDF N are below1.43 ms, and 98.92% of all measured values above 1.43 ms arecontained in PDF Y .As shown in Figure 3, our results are negligibly affectedby the data link bandwidth. Notably, the EER marginallyincreases by almost 0.2% for k = 2 , hardware switcheswhen the bandwidth of the data link increases from 100 Mbpsto 1 Gbps. In this setting, the EER decreases by 0.5% when k = 1 hardware or software switch. B. RTT Feature
In Figure 4, we plot the PDF of $\delta_{RTT}$ values witnessed by probe packets sent within a short time interval (i.e., by the last two probes of our probe train, sent 1 second apart) with respect to a varying number of switches. Our results show that, irrespective of the number of switches, the PDF of all $\delta_{RTT}$ values collected by our clients for which neither of the two probes triggered any rule installation on the switches (referred to as PDF_N) can be fitted to a normal distribution with mean 0. In contrast, the sample mean of the PDF of $\delta_{RTT}$ values for which only the first probe incurred a rule installation (PDF_Y) is strictly greater than 0; PDF_Y and PDF_N are significantly different at 1% according to a t-test. The EER is approximately 0.43% when k = 3 hardware switches, 0.13% when k = 2 hardware switches, 1.25% when k = 1 hardware switch, and increases to 5.84% when k = 1 software switch.

Similar to the dispersion feature, our results are little affected by the speed of the data link (cf. Figure 5). More specifically, the EER was unchanged for k = 3 hardware switches and marginally increased by almost 0.7% for k = 1, 2 hardware switches when the bandwidth of the data link increases from 100 Mbps to 1 Gbps. Given this change, the EER increased by almost 1.5% when k = 1 software switch.

We also compute the EER for $\delta_{RTT}$ values measured over a 10-minute span. As shown in Figure 6, our results indicate that $\delta_{RTT}$ is not a stable feature over time; for example, when k = 1, the EER deteriorates to approximately 5% for a hardware switch, and almost 15% when dealing with a software switch. To further study the stability of $\delta_{RTT}$ over a longer period of time (three months), we conducted a separate experiment using four of our nodes located in Europe. Our results (cf. Figure 7) show that PDF_N and PDF_Y are considerably less distinguishable when RTT values are collected over a larger time span; for example, the EER grows to 39.5% for a time span of 3 months when k = 1 hardware switch (and to 25.17% and 24.83% when k = 2 and k = 3, respectively). We believe that this discrepancy is due to changing network conditions, which incur non-negligible differences in RTT [38]. Namely, our results suggest that the change of the RTT value by a few milliseconds, caused by e.g. a change in the WAN path or traffic conditions, is comparable to the change in the RTT value introduced by an interaction with the controller.

Fig. 4. Fingerprinting SDN networks (100 Mbps link) using $\delta_{RTT}$; panels (a)-(d) show k = 3, 2, 1 hardware switches and k = 1 software switch, time span 1 second. In our plots, we assume a bin size of 250 µs.

Fig. 5. Fingerprinting SDN networks (1 Gbps link) using $\delta_{RTT}$; panels (a)-(d) show k = 3, 2, 1 hardware switches and k = 1 software switch, time span 1 second. In our plots, we assume a bin size of 250 µs.

Fig. 6. Impact of the time span on the $\delta_{RTT}$ feature.
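The EER values reported above can be obtained from the two empirical samples with a threshold sweep, as described in Section III-D. The function and the toy inputs below are our own illustration; real inputs would be the measured dispersion or $\delta_{RTT}$ values:

```python
def equal_error_rate(pdf_n, pdf_y):
    """Estimate the EER for two feature samples: pdf_n (no rule
    installation; expected below the threshold) and pdf_y (rule
    installation; expected above).  Every observed value is tried as a
    candidate threshold; at the EER threshold, FNR and FMR coincide.
    Returns (eer, threshold)."""
    best_gap, best = float("inf"), (1.0, 0.0)
    for thr in sorted(set(pdf_n) | set(pdf_y)):
        fnr = sum(v > thr for v in pdf_n) / len(pdf_n)   # false non-matches
        fmr = sum(v <= thr for v in pdf_y) / len(pdf_y)  # false matches
        if abs(fnr - fmr) < best_gap:
            best_gap, best = abs(fnr - fmr), ((fnr + fmr) / 2, thr)
    return best

# Toy, well-separated samples (e.g., dispersions in ms) with one overlap.
eer, thr = equal_error_rate([0.10, 0.20, 0.30, 0.40], [0.35, 1.5, 2.0, 3.0])
print(eer, thr)   # -> 0.25 0.35
```

With fully separated samples the sweep finds a threshold with EER 0; overlapping samples, as in the long-time-span RTT measurements, push the EER toward 0.5.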
C. Summary of Results
Our evaluation results in Figures 2, 3, 4, and 5 show that fingerprinting attacks on SDN networks are feasible; in fact, they are already realizable using simple features such as packet-pair dispersions and RTTs.

More specifically, our findings suggest that, irrespective of the number of OpenFlow switches that need to be configured in reaction to a given probe packet, the delay introduced by rule installation, max_{∀k} δ_{ik}, provides an effective distinguisher for an adversary to identify whether packets are only processed on the fast data plane, or trigger an interaction with the controller on the relatively slow software-based control plane. This delay is clearly distinguishable using the packet-pair dispersion, which is a stable feature over time, and is little affected by the size of the network (i.e., by the number of OpenFlow switches that need to be configured).

Although packet pairs can be easily crafted by an active adversary, packet pairs might not always be extractable from existing traffic by a passive adversary. However, a passive adversary can monitor existing traffic for packets that share a similar packet header and are sent within a short time interval of each other (e.g., within 10 minutes). Our findings show that the difference of the measured RTT values between two such packets provides evidence for a passive adversary whether any of those packets triggered a reaction on the control plane. Passive fingerprinting is especially detrimental since it limits the applicability of existing intrusion detection systems in detecting SDN fingerprinting attempts; indeed, passive network monitoring does not generate any extra traffic and as such cannot be deterred by relying on anomaly detection.

Fig. 7. δRTT over a period of 3 months when k = 1 hardware switch. Here, we assume a bin size of 250 µs.
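In its simplest form, the distinguisher described above is a threshold test. The following sketch is our own illustration (the function name and structure are not part of any measurement toolchain); the thresholds are the EER-optimal values from Table II for the 100 Mbps data link:

```python
# EER-optimal decision thresholds in milliseconds, taken from Table II
# (100 Mbps data link), indexed by the number k of hardware switches.
THRESHOLDS_MS = {
    "dispersion": {3: 1.43, 2: 1.37, 1: 1.17},
    "delta_rtt":  {3: 4.63, 2: 4.27, 1: 2.13},
}

def rule_installed(feature, value_ms, k):
    """Return True if the measured feature value suggests that the probe
    triggered a controller-switch interaction (i.e., a rule installation)."""
    return value_ms > THRESHOLDS_MS[feature][k]

# Example: a delta-RTT of 10 ms measured against k = 3 hardware switches
# exceeds the 4.63 ms threshold, indicating a rule installation.
print(rule_installed("delta_rtt", 10.0, 3))   # True
```

An active adversary feeds this test with the dispersion of a crafted packet pair; a passive adversary feeds it with the δRTT of two observed packets sharing a similar header.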
Notice, however, that the RTT feature considerably depends on the SDN network size and is less stable over time when compared to packet-pair dispersions. For instance, the EER almost doubles in our experiments when the number of OpenFlow hardware switches that need to be configured decreases. Our results also suggest that the data link bandwidth has little impact on the fingerprinting accuracy for both the RTT and the dispersion features. The presence of a software switch in the communication path, however, considerably deteriorates the EER; even in such a setting, our results nevertheless show that fingerprinting attacks can still be reliably mounted by a remote adversary.

Fig. 8. Fingerprinting SDN networks with respect to the number of switches and the data link bandwidth (100 Mbps vs. 1 Gbps): (a) dispersion feature; (b) δRTT feature.

Table II and Figure 8 summarize our results. We argue that our findings apply to other SDN networks in which the relative difference between the processing speed of packets at the data plane and at the control plane is even more pronounced. Recall that our testbed was devised to emulate a scenario that is particularly hard for fingerprinting. That is, the controller's CPU was idle most of the time during the measurements; the controller used pre-computed rules when issuing forwarding decisions and was connected to a small number of switches (i.e., three); at the time of writing, the deployed OpenFlow hardware switches are among the fastest in installing new flow rules; furthermore, we ensured that the switches' flow tables were empty when performing the measurements, obtaining flow rule installation times on the order of milliseconds [25], [28]. Hence, it is clear that the fingerprinting accuracy provided by our features only increases when the controller is under heavy load, the data plane bandwidth is larger (e.g., 10 Gbps), or the OpenFlow switches require longer times to update their flow tables.

Note, however, that our findings rely on the assumption that there is a single SDN network on the path to the server. Otherwise, while our features still fingerprint controller-switch interactions in more than one SDN network, they do not reveal in which network the interactions take place.

V. IMPLICATIONS
In the previous section, we showed that remote fingerprinting attacks on SDN networks are feasible and easily realizable by means of our proposed features. In what follows, we discuss the implications of our findings on the security of SDN networks.
A. Rule Scanning
Based on our findings, a remote adversary can clearly infer whether a flow rule has already been installed by the controller in order to handle a specific type of traffic or route towards a given destination. For example, the adversary can craft probe packets whose headers match the traffic type and/or destination address and infer, by measuring the timing of the packets, whether these packets triggered the installation of a rule. This provides strong evidence for the adversary that, e.g., communication with the given destination address has recently occurred. Depending on the underlying rule, the adversary might also be able to infer the used network protocol and the destination port address. By doing so, the adversary obtains additional information about the occurrence of a particular communication event; for example, the adversary can infer whether the destination address has recently established an SSL session to perform an e-banking transaction. Notice that this leakage is particular to SDN networks, and does not apply to traditional networks.

Moreover, the remote fingerprinting of rules enables the adversary to better understand the logic adopted by the controller in managing the SDN network. This includes inferring the timeouts set for the expiry of specific rules, whether the controller aims at fine-grained or coarse-grained control in the network, etc. Similar to existing port and traffic scanners, this knowledge can empower the adversary with the necessary means to compromise the SDN network. Even worse, the adversary can leverage this knowledge in order to attack other networks which implement a similar rule installation logic. For instance, in a geographically dispersed datacenter, different sub-domains typically implement the same policies. The adversary can train using one sub-domain and leverage the acquired knowledge in order to compromise another sub-domain.
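A rule-scanning step of this kind can be illustrated with a small helper (a hypothetical illustration, not part of any deployed tool) that interprets the RTTs of two back-to-back probes carrying the same header; the default threshold is the δRTT value from Table II for k = 3 hardware switches on a 100 Mbps link:

```python
def scan_rule(first_rtt_ms, second_rtt_ms, threshold_ms=4.63):
    """Interpret two consecutive probe RTTs for the same packet header.

    - If the first probe is slower than the second by more than the
      threshold, the first probe most likely triggered a rule installation,
      i.e., no matching rule was installed beforehand.
    - Otherwise, a matching rule was already present, suggesting that the
      probed communication (traffic type/destination) recently occurred."""
    if first_rtt_ms - second_rtt_ms > threshold_ms:
        return "rule installed by probe"
    return "matching rule already present"

print(scan_rule(25.0, 8.0))   # rule installed by probe
print(scan_rule(8.3, 8.0))    # matching rule already present
```

Repeating this test over a set of crafted headers (destination addresses, ports, protocols) yields a map of which rules the controller currently has installed.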
B. Denial-of-Service Attacks
The rule space is a scarce resource in existing hardware switches. Namely, state-of-the-art OpenFlow hardware switches can only accommodate a few tens of thousands of rules [20], and only support a limited number of flow-table updates per second [3], [25]. While these limitations can be circumvented by means of a careful design of the rule installation logic, an adversary that knows which packets cause an interaction with the controller can abuse this knowledge to launch Denial-of-Service (DoS) attacks.

For instance, an adversary might simply try to overload the controller with handling packet-in events. More specifically,
TABLE II: SUMMARY OF OBTAINED EERS.

                                      100 Mbps data link                  1 Gbps data link
                               k=3 HW   k=2 HW   k=1 HW   k=1 SW   k=3 HW   k=2 HW   k=1 HW   k=1 SW
Packet-pair    EER              1.08%    0.94%    1.74%    4.49%    1.24%    1.19%    1.25%    3.87%
dispersion     Threshold       1.43 ms  1.37 ms  1.17 ms  0.37 ms  1.42 ms  1.36 ms  0.88 ms  0.20 ms
δRTT, time     EER              0.43%    0.13%    1.25%    5.84%    0.45%    0.84%    2.04%    7.25%
span 1 second  Threshold       4.63 ms  4.27 ms  2.13 ms  0.84 ms  4.60 ms  4.85 ms  2.18 ms  0.85 ms

the adversary sends packets, each of which most likely triggers a controller-switch interaction. Too many such interactions will overload the controller.

Another kind of DoS attack is to fill up the switches' flow tables. An analogy to this is when a computer runs out of memory and starts swapping. Usually, the computer becomes unusable. Similarly, the network performance is severely harmed when the flow tables are full (or even almost full). First, installing flow rules in an almost full flow table is more costly than in an almost empty one. Second, in case the flow table is full, either new network flows cannot be established, which would already be a DoS, or some installed flow rules need to be deleted. However, in general, it is not obvious which rules should be deleted to make room for new rules; this needs to be coordinated by the controller and is a complex operation, which can quickly overload the controller and the switches. For example, the deletion of a rule of an ongoing network flow might entail the rule's immediate reinstallation. This can escalate, and the controller will have to constantly delete and reinstall rules.

An adversary can make both kinds of DoS attacks more likely to succeed by first passively fingerprinting the network traffic, instead of blindly guessing which packets trigger a controller-switch interaction.

VI. PROPOSED COUNTERMEASURE
In this section, we present an efficient countermeasure to prevent fingerprinting attacks on SDN networks. We also evaluate the effectiveness of our countermeasure in our testbed.
A. Our Countermeasure
One possible countermeasure against fingerprinting would be for the switches to delay every received packet before forwarding it. This countermeasure is clearly inefficient as it would severely harm network performance. Randomly deleting flow rules and reinstalling them when receiving the corresponding packet-in events does not solve the problem either. First, additional interactions between the controller and the switches are introduced, resulting in an additional burden on the controller. Second, installing flow rules is a costly switch operation. In what follows, we sketch an efficient countermeasure, which relies on only delaying the first few packets of a network flow.

Our proposal does not concern the handling of new flows, but focuses on processing packets which pertain to existing flows. Namely, for a packet of an existing flow, we leverage the group table [31] and the internal timer maintained by a switch to identify whether this flow has recently appeared. Group tables are used in OpenFlow switches to describe per-packet forwarding actions. They allow one to realize forwarding strategies, such as ECMP [44], which could not be achieved using the flow table abstraction, which describes only per-flow forwarding actions. A group table contains one or more buckets, which in turn contain an action set, similar to the one contained in the flow rules. A group table is further associated with a bucket selection logic, which is related to the group table type. For example, a group table of type "fast failover" implements a selection logic that associates each bucket to a switch's port. Then, the logic selects the first bucket in the table whose associated switch's port status is live. The group action in a flow rule's action set enables one to select which packets should be processed by which group.

Our proposed countermeasure (cf.
Figure 9) defines a new bucket selection logic for the group table, such that packets of active flows are immediately forwarded, while packets of inactive flows are forwarded onto a special port that connects the switch to a network delay element. Our selection logic considers a flow to be inactive if no packets for this flow were received by the switch within a threshold amount of time, T_th, which is measured (in seconds) by the switch's internal timer.

The first received packet of an inactive flow is delayed by δRTT ≈ max_{∀k} δ_{ik}, which gives the adversary little advantage in identifying whether the additional delay measured by the RTT feature is caused by a controller-switch interaction or is artificially introduced by our countermeasure. Moreover, all packets of the same flow received within a short time window W are also delayed by a small ∆; this procedure prevents fingerprinting attempts that leverage the dispersion feature. As shown in Section VI-B, ∆ and δRTT can be fitted to pre-determined distributions, depending on the network size and the number of hops on the communication path. Alternatively, the controller can estimate the distributions corresponding to ∆ and δRTT through a feedback loop.

Notice that our countermeasure is unlikely to deteriorate network performance, since only a few packets per flow are delayed by a few milliseconds (cf. Section VI-B). We further remark that our proposal requires minor modifications by the switches' manufacturers, which are already supported to a large extent in the OpenFlow v1.3 specification. As such, we argue that our proposal can be efficiently implemented (in hardware) within the switches.
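As an illustration, the bucket selection logic and the delay sampling described above can be modeled in software as follows. This is a sketch under stated assumptions, not the in-switch implementation: the values of T_th and W are illustrative, and the Generalized Pareto parameters are the ones reported later in Table III.

```python
import random

T_TH = 10.0   # inactivity threshold T_th in seconds (illustrative value)
W = 0.5       # masking window W in seconds (illustrative value)

def gpd_sample(xi, sigma, mu):
    """Inverse-CDF sample from a Generalized Pareto distribution:
    Q(p) = mu + sigma * ((1 - p)**(-xi) - 1) / xi  for xi != 0."""
    p = random.random()
    return mu + sigma * ((1.0 - p) ** (-xi) - 1.0) / xi

def first_packet_delay_ms():
    # delta-RTT delay for the first packet of an inactive flow (Table III)
    return gpd_sample(-0.53, 10.58, 0.57)

def subsequent_packet_delay_ms():
    # small dispersion-masking delay Delta (Table III)
    return gpd_sample(-0.60, 2.86, 0.45)

last_seen = {}     # flow id -> time the last packet of the flow was seen
reactivated = {}   # flow id -> time the flow last turned inactive -> active

def artificial_delay_ms(flow_id, now):
    """Delay (ms) to apply to a packet of the given flow at time `now` (s)."""
    prev = last_seen.get(flow_id)
    last_seen[flow_id] = now
    if prev is None or now - prev > T_TH:
        reactivated[flow_id] = now
        return first_packet_delay_ms()       # emulate a rule installation
    if now - reactivated.get(flow_id, float("-inf")) <= W:
        return subsequent_packet_delay_ms()  # mask the packet-pair dispersion
    return 0.0                               # active flow: forward immediately
```

With ξ < 0, the sampled delays are bounded by µ + σ/|ξ| (roughly 20.5 ms for the δRTT delay and 5.2 ms for the dispersion delay), which is consistent with only the first few packets of a flow being delayed by a few milliseconds.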
B. Evaluation
We evaluate the effectiveness of our proposed countermeasure using the testbed described in Section III-C. For that purpose, we connect a delay element, running on an Intel Xeon 2.8 GHz CPU with 4 GB of RAM, to a reserved port on
Fig. 9. Sketch of our countermeasure. The packets with destination IP 1.2.3.4 are processed by group table 3, which implements the bucket selection logic specified by our countermeasure. If no packets for this network flow are processed by the switch for a time T > T_th, then the next few packets of the flow are delayed by an appropriate amount.
Fig. 10. Modified testbed used to evaluate our countermeasure.

the outermost switch; as described later, this element delays network packets by a specified amount before outputting them back on another port of the switch (cf. Figure 10). Since our countermeasure requires a modification of the bucket selection logic by the switches' manufacturers, we emulate this logic by tagging the first few packets of inactive flows, and by pre-installing a rule which forwards such tagged packets to the delay element.

Packets are delayed by the delay element according to a pre-determined distribution. To select the best-fit distribution, we applied the Kolmogorov-Smirnov test [24] on a number of candidate distributions, such as Pareto, Generalized Pareto, and Weibull, using our collected measurements as ground truth. Our results show that the Generalized Pareto distribution achieves the highest Kolmogorov-Smirnov test score (of approximately 0.09 for dispersion and 0.08 for δRTT) for both our investigated features (cf. Figure 11). Recall that the Generalized Pareto distribution is of the form:

f_(ξ,µ,σ)(x) = (1/σ) (1 + ξ(x − µ)/σ)^(−1/ξ − 1).

Table III summarizes the parameters of the Generalized Pareto distributions extracted from our measurements and used by the delay element to prevent fingerprinting attempts using the dispersion and the δRTT features. Namely, in our countermeasure, the delay element inserts a delay before transmitting the first packet by randomly sampling from a Generalized Pareto distribution with parameters ξ = −0.53, σ = 10.58, and µ = 0.57, while all subsequent packets sent within the time window W are delayed by randomly sampling from a Generalized Pareto distribution with parameters ξ = −0.60, σ = 2.86, and µ = 0.45 (to prevent fingerprinting using the dispersion feature).

Results: In the remainder of this section, we report on the effectiveness of our proposed countermeasure using the testbed
shown in Figure 10. Here, we collect our measurements using the same process described in Section III-C. Our results are shown in Table IV.

Fig. 11. Fitting experimental data to the Generalized Pareto distribution: (a) dispersion feature; (b) δRTT feature.

TABLE III: PARAMETERS OF THE GENERALIZED PARETO DISTRIBUTIONS USED BY THE DELAY ELEMENT.

                          ξ        σ       µ
Packet-pair dispersion  -0.60     2.86    0.45
δRTT                    -0.53    10.58    0.57

TABLE IV: SUMMARY OF MEASURED EERS USING OUR COUNTERMEASURE.

                        k=3 HW    k=2 HW    k=1 HW    k=1 SW
Packet-pair dispersion  39.76%    46.25%    61.18%    84.57%
δRTT

Our results indicate that our countermeasure considerably impacts the fingerprinting accuracy of a remote adversary using the dispersion and δRTT features. More specifically, our countermeasure increases the EER to almost 40% using the dispersion feature, and to 33% using the δRTT feature, when the network comprises three hardware switches. Our countermeasure, however, increases the EER to almost 84% (using both investigated features) when the network comprises a software switch. Recall that the worst attainable fingerprinting accuracy in this case is when the EER is 50%, which signals that the two distributions
PDF_Y and PDF_N completely overlap. In the case of a software switch, the EER increases to 84%, which means that the adversary has an advantage in distinguishing PDF_Y from PDF_N, in spite of our countermeasure. We believe that this discrepancy mainly originates from the fact that the estimated Generalized Pareto distribution does not emulate well the delays corresponding to software switches (cf. Figure 12).

Similarly, we also argue that better results can be obtained with our countermeasure if the delay element is equipped with fine-grained delay distributions with respect to the different numbers of hardware switches that need to be configured in the network. We validate this hypothesis in a separate experiment. Here, we assume the delay element is equipped with best-fit estimates of the distributions of rule installation delays exhibited by both our features with respect to the number of switches in the network, and we measure the corresponding EER witnessed by a remote adversary in our testbed (cf. Figure 10). Our results in Figure 13 confirm our hypothesis, and show that when the delay element is equipped with fine-grained information about the distributions of rule installation delays in the network, the EER is closer to 50%. For example, in this case, the EER increases to almost 40% using both of our features when the network comprises a software switch, and is almost 47% when two hardware switches need to be configured. This shows that our countermeasure considerably reduces the distinguishing advantage of a remote adversary when fine-grained delay distributions are available to the delay element.

VII. RELATED WORK
This paper extends our prior work on fingerprinting SDN networks [2]. The additional contributions are summarized as follows. (i) Our evaluation shows that fingerprinting of SDN networks with software switches is also feasible. (ii) We also investigate the fingerprinting accuracy when the link bandwidth increases to 1 Gbps. (iii) We conducted further measurements to evaluate the effectiveness of fingerprinting SDN networks over a substantial period of time. (iv) We discuss the implications of our findings on the security of SDN networks. (v)

Fig. 12. Software switch experimental delays vs. the Generalized Pareto distribution: (a) dispersion feature; (b) δRTT feature.
Finally, we analyze and evaluate our countermeasure.

In the remainder of this section, we discuss related work in the areas of network fingerprinting and SDN security.
A. Network Fingerprinting
Network fingerprinting has attracted considerable attention in the research community. Markopoulou et al. [26] show that network delays in backbone networks are relatively stable and are only marginally affected by network congestion. Dischinger et al. [6] show that network features such as bandwidth and delays mainly depend on the last-mile hops in residential networks (e.g., due to ISP traffic shaping). Schulman and Spring [37] extend this observation by showing that end-to-end delays in residential networks are largely affected by weather conditions. These findings confirm our observation that the RTT is not a stable feature over time.

Packet-pair dispersion was first proposed to estimate available bandwidth [16], [21], [43] and the bottleneck bandwidth of a network path [4], [9], [18], [19], [36]. Sinha et al. [42] observe that the distribution of packet-pair dispersions can be used to fingerprint Internet paths. Karame et al. [18] show that the packet-pair dispersion technique is a stable feature which can be used to characterize Internet paths.
B. SDN Security
A comprehensive survey of security issues in SDN can be found in [23].

Fig. 13. Impact of the estimation of the delay distribution on the measured EER (coarse estimation vs. fine-grained estimation vs. no countermeasure): (a) dispersion feature; (b) δRTT feature.
Shin and Gu [39] briefly hint at the possibility of fingerprinting SDN networks by leveraging timing information of the exchanged packets; however, in contrast to our work, their study is not based on a real-world evaluation and does not provide any metrics to quantify fingerprinting accuracy. So-called topology attacks on SDN networks are studied in [5], [15]. Dhawan et al. [5] also discuss DoS attacks, similar to the ones outlined in Section V. They describe an extension of the controller with a monitoring unit, which detects and reports abnormal behavior.

Shin et al. [40] outline a number of vulnerabilities in current SDN controllers such as Floodlight [12], Beacon [10], OpenDaylight [30], and POX [34]; these vulnerabilities allow malicious applications to tamper with the internal data structures maintained by the SDN controller in order to attack the entire SDN network. The authors propose a sandbox approach for network applications to deter this misbehavior. Further mechanisms for securing the control plane are proposed in [33]. Similarly, AVANT-GUARD [41] proposes two data plane extensions to enhance the resilience of an SDN network against network flooding attacks and to expedite access to critical data plane activity patterns. We point out that these prior security solutions do not deter fingerprinting attacks on SDN networks. To the best of our knowledge, our countermeasure emerges as the only workable solution to alleviate SDN fingerprinting attacks. Moreover, we are the first to discuss information leakage concerning the packet-forwarding logic in SDN.

VIII. CONCLUSION
In this paper, we studied the fingerprinting of SDN networks by a remote adversary. For that purpose, we collected measurements from a large number of hosts located across the globe using a realistic SDN network. Our evaluation shows that, by leveraging information from the RTT and packet-pair dispersion of the exchanged packets, fingerprinting attacks on SDN networks succeed with overwhelming probability. Our results also suggest that fingerprinting attacks are not restricted to active adversaries, but can also be mounted by passive adversaries that capture a snapshot of the traffic exchanged with the SDN network.

Based on our results, we presented and evaluated a countermeasure that leverages the switches' group tables in order to delay the first few packets of every flow. Our evaluation results show that our countermeasure considerably reduces the ability of an adversary to mount fingerprinting attacks against SDN networks.

ACKNOWLEDGMENTS
REFERENCES

[1] M. F. Bari, R. Boutaba, R. P. Esteves, L. Z. Granville, M. Podlesny, M. G. Rabbani, Q. Zhang, and M. F. Zhani. Data center network virtualization: A survey. IEEE Communications Surveys and Tutorials, 15(2):909–928, 2013.
[2] R. Bifulco, H. Cui, G. O. Karame, and F. Klaedtke. Fingerprinting software-defined networks. In Proceedings of the 23rd International Conference on Network Protocols (ICNP). IEEE Computer Society, 2015.
[3] R. Bifulco and M. Dusi. Position paper: Reactive logic in software-defined networking: Accounting for the limitations of the switches. In Proceedings of the 3rd European Workshop on Software Defined Networks (EWSDN), pages 97–102. IEEE Computer Society, 2014.
[4] D. Croce, T. En-Najjary, G. Urvoy-Keller, and E. Biersack. Capacity estimation of ADSL links. In Proceedings of the 4th ACM Conference on Emerging Network Experiments and Technology (CoNEXT). ACM Press, 2008.
[5] M. Dhawan, R. Poddar, K. Mahajan, and V. Mann. SPHINX: Detecting security attacks in software-defined networks. In Proceedings of the 22nd Annual Network and Distributed System Security Symposium (NDSS). The Internet Society, 2015.
[6] M. Dischinger, A. Haeberlen, K. P. Gummadi, and S. Saroiu. Characterizing residential broadband networks. In Proceedings of the 7th ACM SIGCOMM Internet Measurement Conference (IMC), pages 43–56. ACM Press, 2007.
[7] D. Dobre, G. Karame, W. Li, M. Majuntke, N. Suri, and M. Vukolić. PoWerStore: Proofs of writing for efficient and robust storage. In Proceedings of the 2013 ACM SIGSAC Conference on Computer Communications Security (CCS), pages 285–298. ACM Press, 2013.
[8] C. Dovrolis, P. Ramanathan, and D. Moore. What do packet dispersion techniques measure. In Proceedings of the 20th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), pages 905–914. IEEE Computer Society, 2001.
[9] C. Dovrolis, P. Ramanathan, and D. Moore. Packet-dispersion techniques and a capacity-estimation methodology. IEEE/ACM Transactions on Networking, 12(6):963–977, 2004.
[10] D. Erickson. The Beacon OpenFlow controller. In Proceedings of the 2nd ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN), pages 13–18. ACM Press, 2013.
[11] S. K. Fayazbakhsh, L. Chiang, V. Sekar, M. Yu, and J. C. Mogul. Enforcing network-wide policies in the presence of dynamic middlebox actions using FlowTags. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI)
SIGCOMM Computer Communication Review, 38(3):105–110, 2008.
[15] S. Hong, L. Xu, K. Wang, and G. Gu. Poisoning network visibility in software-defined networks: New attacks and countermeasures. In Proceedings of the 22nd Annual Network and Distributed System Security Symposium (NDSS). The Internet Society, 2015.
[16] N. Hu and P. Steenkiste. Evaluation and characterization of available bandwidth probing techniques. IEEE Journal on Selected Areas in Communications, 21(6):879–894, 2003.
[17] X. Jin, L. E. Li, L. Vanbever, and J. Rexford. SoftCell: Scalable and flexible cellular core network architecture. In Proceedings of the 9th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT), pages 163–174. ACM Press, 2013.
[18] G. O. Karame, B. Danev, C. Bannwart, and S. Capkun. On the security of end-to-end measurements based on packet-pair dispersions. IEEE Transactions on Information Forensics and Security, 8(1):149–162, 2013.
[19] G. O. Karame, D. Gubler, and S. Capkun. On the security of bottleneck bandwidth estimation techniques. In Proceedings of the 5th International ICST Conference on Security and Privacy in Communication Networks (SecureComm), volume 19 of Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pages 121–141. Springer, 2009.
[20] N. Katta, J. Rexford, and D. Walker. Infinite CacheFlow in software-defined networks. In Proceedings of the 3rd ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN). ACM Press, 2014.
[21] S. Keshav. A control-theoretic approach to flow control. In Proceedings of the 6th ACM SIGCOMM Conference on Communications Architecture and Protocols (SIGCOMM), pages 3–15. ACM Press, 1991.
[22] R. Kloti, V. Kotronis, and P. Smith. OpenFlow: A security analysis. In Proceedings of the 21st IEEE International Conference on Network Protocols (ICNP). IEEE Computer Society, 2013.
[23] D. Kreutz, F. M. V. Ramos, P. J. Esteves Verissimo, C. Esteve Rothenberg, S. Azodolmolky, and S. Uhlig. Software-defined networking: A comprehensive survey. Proceedings of the IEEE, 103(1):14–76, 2015.
[24] Kolmogorov-Smirnov test. https://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test.
[25] A. Lazaris, D. Tahara, X. Huang, L. E. Li, A. Voellmy, Y. R. Yang, and M. Yu. Tango: Simplifying SDN programming with automatic switch behavior inference, abstraction, and optimization. In Proceedings of the 10th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT), pages 199–212. ACM Press, 2014.
[26] A. Markopoulou, F. Tobagi, and M. Karam. Loss and delay measurements of internet backbones. Computer Communications, 29(10):1590–1604, 2006.
[27] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling innovation in campus networks. SIGCOMM Computer Communication Review, 38(2):69–74, 2008.
[28] J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, A. R. Curtis, and S. Banerjee. DevoFlow: Cost-effective flow management for high performance enterprise networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets)
Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets). ACM Press, 2009.
[33] P. Porras, S. Cheung, M. Fong, K. Skinner, and V. Yegneswaran. Securing the software-defined network control layer. In Proceedings of the 22nd Annual Network and Distributed System Security Symposium (NDSS)
Proceedings of the ACM SIGCOMM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), pages 27–38. ACM Press, 2013.
[36] S. Saroiu, P. K. Gummadi, and S. D. Gribble. SProbe: A fast technique for measuring bottleneck bandwidth in uncooperative environments. http://sprobe.cs.washington.edu/sprobe.ps, 2002.
[37] A. Schulman and N. Spring. Pingin' in the rain. In Proceedings of the 11th ACM SIGCOMM Internet Measurement Conference (IMC), pages 19–28. ACM Press, 2011.
[38] Y. Schwartz, Y. Shavitt, and U. Weinsberg. A measurement study of the origins of end-to-end delay variations. In Proceedings of the 11th International Conference on Passive and Active Measurement (PAM), volume 6032 of Lecture Notes in Computer Science, pages 21–30. Springer, 2010.
[39] S. Shin and G. Gu. Attacking software-defined networks: A first feasibility study. In Proceedings of the 2nd ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN), pages 165–166. ACM Press, 2013.
[40] S. Shin, Y. Song, T. Lee, S. Lee, J. Chung, P. Porras, V. Yegneswaran, J. Noh, and B. B. Kang. Rosemary: A robust, secure, and high-performance network operating system. In Proceedings of the 2014 ACM SIGSAC Conference on Computer Communications Security (CCS), pages 78–89. ACM Press, 2014.
[41] S. Shin, V. Yegneswaran, P. Porras, and G. Gu. AVANT-GUARD: Scalable and vigilant switch flow management in software-defined networks. In Proceedings of the 2013 ACM SIGSAC Conference on Computer Communications Security (CCS), pages 413–424. ACM Press, 2013.
[42] R. Sinha, C. Papadopoulos, and J. Heidemann. Fingerprinting internet paths using packet pair dispersion. Technical Report 06-876, Department of Computer Science, University of Southern California, 2006.
[43] J. Strauss, D. Katabi, and F. Kaashoek. A measurement study of available bandwidth estimation tools. In Proceedings of the 3rd ACM SIGCOMM Internet Measurement Conference (IMC), pages 39–44. ACM Press, 2003.
[44] D. Thaler and C. Hopps. Multipath issues in unicast and multicast next-hop selection, 2000. https://tools.ietf.org/html/rfc2991.
[45] t-test. Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Student%27s_t-test.
[46] A. Voellmy, J. Wang, Y. R. Yang, B. Ford, and P. Hudak. Maple: Simplifying SDN programming using algorithmic policies. In