More Than The Sum Of Its Parts: Exploiting Cross-Layer and Joint-Flow Information in MPTCP
Tanya Shreedhar†, Nitinder Mohan‡, Sanjit K. Kaul† and Jussi Kangasharju‡
†IIIT-Delhi, India  ‡University of Helsinki, Finland
ABSTRACT
Multipath TCP (MPTCP) is an extension to TCP which aggregates multiple parallel connections over available network interfaces. MPTCP bases its scheduling decisions on the individual RTT values observed at the subflows, but does not attempt to perform any kind of joint optimization over the subflows. Using the MPTCP scheduler as an example, in this paper we demonstrate that exploiting cross-layer information and optimizing scheduling decisions jointly over the multiple flows can lead to significant performance gains. While our results only represent a single data point, they illustrate the need to look at MPTCP from a more holistic point of view and not treat the connections separately, as is currently being done. We call for new approaches and research into how the multiple parallel connections offered by MPTCP should be used in an efficient and fair manner.
1. INTRODUCTION
The Transmission Control Protocol (TCP) is an essential part of the Internet and is used by a majority of applications to reliably transport their data over the network. Since its inception, TCP has seen a lot of research, particularly on how its performance and throughput could be improved [11, 22, 23]. More recent work has led to the development of Multipath TCP (MPTCP), an extension to standard TCP standardized by the IETF. It operates under the viewpoint that end hosts such as smartphones and servers are equipped with multiple access interfaces (Ethernet, WiFi, 3G/4G, Bluetooth) and can form multiple parallel network paths between communicating devices. Unlike a single-path TCP connection, MPTCP utilizes multiple parallel TCP subflows between hosts via (all) available network interfaces for simultaneous data transfer. It achieves robustness and resilience to link failures, provides seamless connection handovers over different network interfaces, and aggregates the throughput and bandwidth of the underlying TCP connections [3, 26]. Due to these benefits, researchers have proposed utilizing MPTCP in datacenters [24], opportunistic networks [25], etc. The Linux implementation of the protocol has been made open-source to network researchers [21], and a commercial version is used by Apple Inc. to support its digital assistant Siri on iOS and macOS systems. Recently, Apple Inc. has also made its MPTCP API open to iOS application developers so that they can fully utilize its capability [13].

Figure 1: Queues in the MPTCP transmission path (send buffer, MPTCP scheduler, per-subflow TCP stacks, device driver queues, and NICs)

MPTCP protocol design is influenced by network compatibility and by ensuring fairness to existing TCP connections. Figure 1 shows the network stack of an MPTCP-compliant client. MPTCP adds a scheduling layer over existing TCP connections and routes application packets to one of the subflows based on a decision parameter. Efficient scheduling decisions can improve the delay performance of MPTCP. Several schedulers have been proposed by researchers, but, to remain compliant with the modular TCP design, the default scheduler injects packets on the subflow with the lowest smoothed TCP RTT (SRTT) value [19]. TCP RTT presents delayed information about internal network behavior such as congestion, packet drops, and queueing, and does not explicitly indicate the reason for a change in network conditions. Unlike traditional TCP, MPTCP has the capability to proactively switch between TCP flows if it senses issues on one of them. However, MPTCP primarily treats the individual streams as separate entities and does not attempt to optimize performance across them.

In this paper, we argue that MPTCP should take a more comprehensive view over individual subflows and attempt to optimize the overall performance, as opposed to treating them as separate parallel entities. MPTCP currently is limited to the TCP-level information provided by the individual TCP subflows, but we claim that a more holistic approach is needed to exploit the possibilities offered by multiple flows to the full extent. This implies the need to take a different
approach in MPTCP research, and in general on how to exploit parallel resources in networking, which can also serve other areas of networking [1, 10, 14, 15].

Figure 2: Experimentally obtained goodputs and RTTs of two MPTCP subflows using non-interfering WiFi access paths. The paths and network are shown in Figure 3.

To illustrate one aspect of the problem and to motivate future work, we present several issues in the current implementation and operation of MPTCP that stem from its reliance on TCP-layer information. We show via controlled experiments that the default minSRTT scheduler of MPTCP essentially forces the protocol to use only one of the many available flows and thus leads to lower performance. To demonstrate the performance that can be achieved by MPTCP, we develop the QueueAware scheduler, which considers the network interface device queue size while making scheduling decisions. We evaluate and compare the performance of QueueAware with the minSRTT scheduler in several realistic scenarios over WiFi and 4G network interfaces. We also sketch how other parameters and network conditions currently ignored by MPTCP could be used to improve the efficiency of network communication.

The remainder of the paper is organised as follows. In Section 2, we show the gap in performance of the default MPTCP scheduler via controlled experiments. In Section 3, we describe the scheduling policy used by QueueAware. Section 4 describes how we evaluate the efficacy of QueueAware using ns-3 simulations. Section 5 quantifies the gains in performance achieved by QueueAware over the default MPTCP scheduler. We discuss related work in Section 6. In Section 7 we discuss various network parameters and network conditions that are currently overlooked by MPTCP when making scheduling decisions; these motivate future avenues for research. We conclude in Section 8.
2. DOWNSIDES OF IGNORING LOCAL INFORMATION
Figure 2(a) exemplifies goodput obtained from controlled testbed experiments that show how the default MPTCP scheduler optimizes over two available TCP subflows that use non-interfering end-to-end paths. The network topology used in the experiment is shown in Figure 3. The topology emulates a situation where the access network is a bottleneck, but the core network has high-speed pipes.
Figure 3: Topology used in experiments and simulations (sender connected over two last-mile access points, through a backbone switch and the core, to the server)

Neither flow drops any packets during the length of the experiment. One would, therefore, expect the scheduler to fill packets on both flows and achieve stable goodput on both. However, MPTCP more often than not prefers to send packets on one flow over the other and is unable to optimize jointly over the available paths. As a result, MPTCP is only able to utilize a fraction of the available aggregated bandwidth in the experiment. We will argue that the reasons for this are two-fold: (a) decision making by the scheduler only uses the SRTT values of the flows, which is an end-to-end feedback mechanism and hence delayed; and (b) this delayed SRTT feedback leads to the scheduler selecting one flow over the other for undesirably long intervals, instead of suitably allocating packets on both flows. These reasons are in fact a consequence of MPTCP being incognizant of cross-layer information that is readily available locally.

Interestingly, the scheduling decisions that lead to high device queue occupancy and an increase in SRTT were made using values of SRTT that corresponded to an earlier interval when the device queue was lightly loaded. So while a device queue (local to the MPTCP sender and used by an MPTCP flow) is full with packets, MPTCP remains oblivious to the same. Instead, it waits to be informed via a delayed end-to-end SRTT-based feedback mechanism about the fact that the flow it has been assigning packets to is in fact loaded. In the process, it loses out on many opportunities of scheduling packets to the other, better flow, the one experiencing lower queue delay. Further, as the device queue is drop-tail in nature, i.e. it starts dropping incoming packets when full, MPTCP has to retransmit locally dropped packets, which it infers only after a missing ACK. Finally, a rapid dip in SRTT is noticed when a flow, which was earlier heavily queued but stayed unused for a while, is assigned new packets.
These packets experience smaller waiting times and thus have a small RTT.

Next, we describe the QueueAware scheduler for MPTCP, which is motivated by the observation that MPTCP does not have to rely on a delayed increase in SRTT to identify local queue delay. We show that, instead, using the occupancy of the device queues together with SRTT enables MPTCP to use all available flows more efficiently.

Figure 4: Queueing abstraction of an end-to-end MPTCP connection with two flows (arrival rate λ into the send buffer; the scheduler routes packets to two service facilities, each a queue plus a server, with rates µ1 and µ2)
3. QUEUEAWARE SCHEDULER
The MPTCP scheduler chooses amongst one of many available TCP subflows for each application packet arrival, such that the end-to-end throughput is maximized. We consider a simplified queueing-theoretic abstraction to capture the essentials of this problem. Specifically, we model each subflow by a service facility. Figure 4 illustrates the abstraction for an MPTCP end-to-end connection that uses two TCP flows. The abstraction allows us to apply results from the analysis of multi-queue systems [27].

In our queueing abstraction, packets generated by an application arrive into a queue that models the TCP send buffer (Figure 1). Packets in this queue are assigned to one of the available service facilities in a first-come-first-serve (FCFS) manner. Each facility consists of a finite queue and a server. Packets inside a facility are serviced in an FCFS manner. The queue in a service facility resembles the network interface queue that is used by the TCP subflow corresponding to it. The server includes the network interface card, the destination host (all layers of the TCP/IP stack), and intermediate nodes in the core and the access network used by the flow.
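The abstraction above can be exercised in a few lines of code. The sketch below is our own illustration, not the paper's implementation: the rates, packet count, and event bookkeeping are all assumed for the example. It simulates Poisson arrivals routed to two FCFS facilities with exponential service times, comparing routing by minimum conditional expected wait, n_k(t)/µ_k, against uniformly random routing.

```python
import heapq
import random

def simulate(policy, lam=1.2, mus=(1.0, 0.5), n=20000, seed=7):
    """Route n Poisson arrivals (rate lam) to parallel FCFS facilities with
    exponential service rates mus, using `policy`; return mean sojourn time."""
    rng = random.Random(seed)
    pending = [[] for _ in mus]   # heap of outstanding departure times per facility
    last_dep = [0.0] * len(mus)   # departure time of the newest packet per facility
    t, total = 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)              # next Poisson arrival instant
        for h in pending:                      # forget packets that already departed
            while h and h[0] <= t:
                heapq.heappop(h)
        # len(h) approximates n_k(t): packets still present in facility k at time t
        k = policy(rng, [len(h) for h in pending], mus)
        dep = max(t, last_dep[k]) + rng.expovariate(mus[k])   # FCFS departure time
        last_dep[k] = dep
        heapq.heappush(pending[k], dep)
        total += dep - t                       # sojourn time = wait + service
    return total / n

def min_wait(rng, n_k, mus):
    """Policy of Eq. (1): pick the facility minimizing n_k(t) / mu_k."""
    return min(range(len(mus)), key=lambda k: n_k[k] / mus[k])

def random_pick(rng, n_k, mus):
    """Baseline: route each packet to a uniformly random facility."""
    return rng.randrange(len(mus))
```

With these assumed parameters, random routing overloads the slower facility (it receives load 0.6 against capacity 0.5), while the shortest-expected-wait policy keeps both facilities stable, so its mean sojourn time is far smaller.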
Origins of the QueueAware scheduler:
Many analytical works on queueing systems have looked at scheduling customers/packets to parallel servers [27, 29-31]. Packet arrivals are modeled as a random process (for example, a Poisson process) with an arrival rate λ. The time a server takes to service packets (the service time) is modeled as a random variable. Each server has a known service rate, which is the inverse of the expectation of the service time. The two servers in Figure 4 have service rates of µ1 and µ2. For many general arrival processes and service time distributions, when all servers are stochastically identical, the policy of choosing the service facility with the minimum number of packets is known to be optimal [27, 29, 31]; that is, it maximizes the number of packets serviced in a given amount of time. For the case of non-identical servers, we were unable to find an optimal policy. However, for the case when the arrivals are Poisson and the service times are non-identical but exponentially distributed, a scheduling policy that assigns a packet to the service facility that minimizes the conditional expected waiting time of the packet, conditioned on the knowledge of the number of packets waiting for service in the facility, is shown to have good performance [27]. Our QueueAware scheduler uses this policy in an MPTCP setting.

Consider K service facilities indexed 1, . . . , K. Let facility k have a service rate of µ_k. Let n_k(t) be the number of packets waiting for service in facility k at time t. The policy assigns a packet to the service facility k* given by

k* = arg min_k n_k(t)/µ_k.  (1)

Note that 1/µ_k is the expected service time of a packet in facility k. As a result, the conditional expected waiting time of a packet that enters such a facility is n_k(t)/µ_k, which is the sum of the expected service times of the n_k(t) packets currently waiting for service in the facility.

Adapting policy (1) to multiple end-to-end TCP subflows:
The number of packets n_k(t) in service facility k (corresponding to TCP subflow k) is the number of packets in the corresponding device queue and can be obtained locally. However, we must estimate the service rate µ_k.

Consider the i-th packet arrival. Let t_s,i be the time the packet is assigned to a service facility. Let t_a,i be the time that a TCP ACK acknowledges receipt of the packet. The RTT of the packet is RTT_i = t_a,i - t_s,i. Note that the RTT includes the time this packet waits in the queue of its assigned service facility before it starts service, and the time it spends in service. Let the wait time be W_i. This time can be calculated locally at the MPTCP sender. The time X_i that the packet spends in service begins when the packet enters the NIC for transmission and ends when a TCP ACK for the packet is received. Given W_i and RTT_i, we have X_i = RTT_i - W_i. The estimate of the service rate is updated on receipt of a TCP ACK. Let Ŝ_k be the current estimate of the average service time of facility k. On receipt of a TCP ACK for the packet, we update

Ŝ_k = α Ŝ_k + (1 - α) X_i,  (2)

where 0 < α < 1 applies appropriate weights to the last estimate of the average and the most recent service time. We use α = 0. in this work. The corresponding estimate of the service rate is 1/Ŝ_k. At time t, QueueAware schedules to the TCP subflow k* that satisfies

k* = arg min_k n_k(t) Ŝ_k.  (3)
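The per-ACK bookkeeping above can be summarized in a short sketch. This is our own illustration, not the authors' implementation: the class and field names are invented, and since the exact value of α is elided in the extracted text we assume α = 0.9 purely for the example.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    queue_len: int = 0     # n_k(t): packets waiting in the device driver queue
    s_hat: float = 1e-3    # Ŝ_k: EWMA estimate of the service time, in seconds

class QueueAwareScheduler:
    def __init__(self, subflows, alpha=0.9):
        self.subflows = subflows
        self.alpha = alpha          # weight on the previous estimate in Eq. (2)

    def schedule(self):
        """Eq. (3): pick k* = argmin_k n_k(t) * Ŝ_k (conditional expected wait)."""
        return min(range(len(self.subflows)),
                   key=lambda k: self.subflows[k].queue_len * self.subflows[k].s_hat)

    def on_assign(self, k):
        self.subflows[k].queue_len += 1     # packet enters device queue k

    def on_ack(self, k, rtt, wait):
        """On a TCP ACK: X_i = RTT_i - W_i, then Ŝ_k <- α Ŝ_k + (1-α) X_i (Eq. 2)."""
        sf = self.subflows[k]
        sf.s_hat = self.alpha * sf.s_hat + (1 - self.alpha) * (rtt - wait)
        sf.queue_len = max(0, sf.queue_len - 1)
```

Note that, as in the paper, only locally measurable quantities are used: the device queue occupancy and the sender-side wait time, plus the RTT already maintained by TCP.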
4. EVALUATION METHODOLOGY
We simulated network topologies of the kind shown in Figure 3 using the network simulator ns-3. An MPTCP client uses two TCP subflows to the MPTCP server. (For simplicity of exposition, we ignore the time a TCP ACK may have to wait in a queue before being sent to the TCP layer.) The simulator ns-3 has an implementation of MPTCP [8] that we modified to include QueueAware. We compare the performance of the default minSRTT scheduler of MPTCP with that of QueueAware.

We simulated scenarios where both MPTCP flows have reliable paths, and where one of the paths is unreliable and drops TCP packets. We will show that QueueAware, unlike minSRTT, is able to jointly use available flow capacities to achieve larger per-flow (and aggregate) goodputs. Also, it does at least as well as the default minSRTT scheduler in scenarios where paths used by TCP subflows are unreliable.
Reliable Paths:
We consider two non-interfering paths that see no packet drops, but the corresponding access networks (last mile in Figure 3) have different bottleneck rates. Specifically, we simulated the following network scenarios:

• Identical WiFi Access Points:
The MPTCP client uses its WiFi interfaces to connect to identical WiFi access points providing reliable links. This use case may occur in an enterprise WiFi network, where a client may have more than one good access point in its vicinity. We set the WiFi link rate to each access point to Mbps.

• Non-identical WiFi Access Points:
The use case remains the same as above. However, one of the two access points has a faster link of Mbps, for example, because of greater proximity to the MPTCP client.

• Heterogeneous networks of WiFi and 4G:
The MPTCP client uses its WiFi interface and 4G interface to connect to access points of the different technologies. The WiFi link rate is Mbps and the 4G link rate is Mbps. Though the link rate of 4G is larger, packets experience larger RTTs over 4G [5].
Unreliable Paths:
We simulated an MPTCP client that uses its two WiFi interfaces to connect to access points. However, the TCP subflow using one of the access points suffers TCP packet errors. We set a small packet error rate on that path. In a real setting, this could happen because the client is close to losing coverage from the access point or because the access point is heavily loaded.

Lastly, for a selection of paths and network technologies, we simulated short-duration MPTCP subflows by performing a small file upload of MB.

For all simulations, the backbone link was modeled as Ethernet with rate Mbps, and the core network was modeled as a Mbps link. Our choice of links within the access networks makes them the bottleneck. The application at the MPTCP client generated a load of Mbps. All results were averaged over 10 simulation runs. The access networks, backbone, and core see no traffic other than that created by our MPTCP client. We defer performance evaluation of QueueAware under more realistic loads and larger numbers of MPTCP clients to future work.
5. RESULTS

5.1 Reliable Paths
We consider the scenario in which both TCP subflows use reliable paths. Figure 5(a) compares subflow goodputs when there are two identical WiFi access points. Observe that both QueueAware and minSRTT show similar behavior in total goodputs per flow. However, a larger goodput per flow is achieved by QueueAware. As a result, it achieves an increase in aggregate goodput over minSRTT. This is because minSRTT, as a consequence of only using the delayed feedback provided by SRTT, ends up scheduling packets to a subflow that has a high device queue occupancy for a longer time. Next, we shed more light on the differences in the behaviors of the schedulers in Figures 6(a)-8(b).

Figures 6(a) and 6(b) show the goodputs on the two flows obtained respectively by QueueAware and minSRTT as a function of time. While QueueAware can maintain a stable and almost equal goodput over both the flows, minSRTT at any given time chooses one flow over the other. This behavior of the goodputs is made clear by Figures 7(a) and 7(b), which show the SRTT behavior over the same time duration. Subflows experience high SRTT for longer stretches of time when using the minSRTT scheduler, compared to when using the QueueAware scheduler. This corresponds to longer stretches of full queue occupancy when using minSRTT (see Figure 8(b)), compared to the QueueAware scheduler (see Figure 8(a)).

Figure 5(b) shows the goodput performance when subflows use non-identical WiFi access points. It can be observed that QueueAware and minSRTT both perform similarly for the subflow that uses the faster link. However, importantly, QueueAware also makes good use of the other, slower link. Under QueueAware, the subflow using the slower link obtains a substantial fraction of the goodput obtained by the subflow using the faster link. On the other hand, minSRTT gets hardly any goodput on the subflow using the slower link.
The use of locally available cross-layer device queue occupancy information together with SRTT enables QueueAware to make good use of both the available WiFi interfaces.

Finally, Figure 5(c) shows the goodput performance when one of the subflows uses a WiFi interface and the other uses a 4G interface that has twice the WiFi link rate. Recall that the 4G network suffers from a larger RTT. Even in this case we observe that QueueAware utilizes the bandwidth offered by both network interfaces considerably better than minSRTT. QueueAware achieves more aggregate goodput than minSRTT, and a multiple of minSRTT's goodput via 4G. The underutilization of 4G by minSRTT has earlier been observed in real-world deployment tests, where the default scheduler was found to prefer the WiFi subflow most of the time [6].

Figure 5: Goodputs of MPTCP flows when using QueueAware and minSRTT, for the scenarios described in Section 4: (a) identical WiFi access points, (b) non-identical WiFi access points, (c) WiFi and 4G, (d) one unreliable path
Figure 6: Comparison of goodputs of subflows obtained by (a) QueueAware and (b) minSRTT
Figure 7: Comparison of SRTT of subflows obtained by (a) QueueAware and (b) minSRTT
Figure 8: Comparison of queue lengths of subflows obtained by (a) QueueAware and (b) minSRTT
5.2 Unreliable Paths

On detecting packet loss, the congestion window of a subflow initiates the congestion avoidance phase. This limits the number of packets that can be sent on the subflow. Furthermore, MPTCP employs Penalization and Retransmission (PR), where on a packet error MPTCP reduces the congestion window of the subflow with high RTT and reinjects the lost packet on the other available subflow [32]. Though this technique reduces the possibility of Head-of-Line (HoL) blocking, it also limits the sending rate and significantly impacts overall goodput.

Figure 5(d) shows the goodputs achieved by the two subflows when one of the subflows experiences a small packet loss rate. Due to the above stated reason, both schedulers achieve rather small goodputs on the subflow experiencing errors. However, QueueAware is able to better exploit the subflow that has a reliable path. On this path, it achieves a clear improvement in goodput over minSRTT.

Table 1 shows the upload completion time of the small file for QueueAware and minSRTT for different interfaces and different path reliability. QueueAware achieves a decrease in upload time with respect to minSRTT.

Table 1: Small file upload completion time

Scenario                    | Path reliability    | Completion time (s)
Two WiFi, each rate Mbps    | Reliable paths      | 1.456
Two WiFi, each rate Mbps    | Errors on one path  | 2.527
WiFi and 4G                 | No Tx errors        | 2.439
6. RELATED WORK
Several researchers have proposed improvements to the default minSRTT scheduler of MPTCP. Paasch et al. [20] designed a modular scheduler framework for MPTCP and compared the performance of the default minSRTT scheduler with a round-robin scheduler. Baidya et al. [2] propose an RTT-based path quality metric to adapt out-of-order transmissions while limiting the usage of a slower path. Kuhn et al. [16] aim to reduce overall application delay by estimating the maximum allowed receiver-buffer blocking time to transmit out-of-order packets on multiple paths. On the other hand, Hwang et al. [12] propose to freeze the utilization of the slower path when the difference in RTTs of the faster and slower paths exceeds a calculated threshold. BLEST [9] and OTIAS [33] balance heterogeneous flows and reduce Head-of-Line blocking by considering several parameters such as CWND and in-flight packets along with SRTT. CMT-RMDS [4] proposes adopting receiver-centric path characteristics along with sender-driven RTT values to better estimate current path conditions.

Researchers have also proposed utilizing other network parameters to provide better path estimation. Corbillon et al. [7] leverage application-layer information in transport-layer flow scheduling decisions to provide delay-resilient video streaming in MPTCP. Lim et al. [28] label a WiFi subflow as active/inactive for data transmission based on a minimum desired signal strength. F2P-DPS [17] proposes to combine several TCP parameters such as CWND, SSThresh, and RTTs to estimate subflow weights for data transmissions. Ni et al. [18] utilize reverse-path SACK packets to inform the sender of any out-of-order or lost packets at the receiver buffer and calculate an offset to provide successful data chunk delivery.

Although the solutions mentioned above tackle several critical issues affecting Multipath TCP in the real world, these techniques are significantly dependent on accurate estimation of current path characteristics. The solutions that utilize the SRTT value as the current path performance metric, or are dependent upon it, suffer from the same issues as minSRTT, as shown in this paper. To efficiently handle varying application data traffic over heterogeneous paths in MPTCP, an ideal scheduler must be able to schedule packets over flows proactively and must adapt to network conditions swiftly.

Table 2: Network parameters impacting RTT

Host machine           | Network                 | Receiver
Data rate              | Channel utilization     | Bandwidth utilization
Retry percentage       | Path congestion         | Receiver queue delays
Network interface type | Number of nodes on path | Congestion window size
7. DISCUSSION AND FUTURE WORK
The aim of our evaluation was to show that RTT alone is not the best measure for scheduling packets over multiple network interfaces in MPTCP. The sender device queue is one such parameter that needs to be monitored and considered in the overall scheduling decision. Only by including cross-layer feedback into the MPTCP control loop can we fully exploit the potential offered by multiple connections. As shown in this paper, treating the links as independent, parallel TCP flows restricts MPTCP's performance.

Several other conditions can lead to changes in MPTCP's efficiency. Table 2 lists a few such variables that impact RTT and the network path and that could be exploited by a more holistic MPTCP. These include host-specific parameters, such as device queue length; network-specific aspects, such as congestion; and receiver-specific parameters, such as window size. An efficient MPTCP should look at all or many of these parameters in conjunction when deciding how to schedule packets and allocate the available flows for data transport.

Several other factors can also significantly impact network behavior. Congestion at an access point, over-utilization of channel capacity, and queueing delay at the receiver are some such scenarios. However, unlike the locally accessible parameters (such as device queue occupancy), explicitly monitoring and predicting these parameters is an interesting research question of its own. One possibility is to use explicit notifications such as ECN to convey the occurrence of such a scenario. Incorporating these mechanisms and comparing their performance impact on the current MPTCP implementation would be an interesting avenue for future research in the wider community.

In this paper, we argue that the current way of treating the individual subflows practically separately, relying only on their TCP-level information, is not sufficient for fully exploiting the potential offered by multiple connections, and a more holistic approach to MPTCP and how it manages the connections is needed.
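On Linux, the sender-side device queue occupancy that the paper treats as locally accessible can indeed be read without protocol changes, for example from the qdisc statistics printed by iproute2's `tc`. The sketch below is our own illustration, not part of the paper: the interface name is a placeholder, and the exact `backlog` line format may vary across qdiscs and iproute2 versions.

```python
import re
import subprocess

def parse_backlog(tc_output: str) -> int:
    """Extract the queued-packet count from `tc -s qdisc` output.
    `tc` reports a statistics line such as: 'backlog 45624b 30p requeues 0'."""
    m = re.search(r"backlog\s+\S+\s+(\d+)p", tc_output)
    return int(m.group(1)) if m else 0

def device_queue_packets(iface: str) -> int:
    """Query the qdisc backlog of `iface` (requires iproute2's `tc` binary)."""
    out = subprocess.run(["tc", "-s", "qdisc", "show", "dev", iface],
                         capture_output=True, text=True, check=True).stdout
    return parse_backlog(out)
```

A scheduler polling `device_queue_packets("wlan0")` per decision would obtain the n_k(t) term of Section 3 from purely local state, without waiting for end-to-end SRTT feedback.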
Our results in this paper illustrate only one small facet of the bigger problem, but broader research efforts are needed to better understand how such multiple connections should be used in modern networks. These cover questions such as fairness to other network flows, general performance, flow control, and security, but they all apparently require a different approach to the problem than parallel, but largely independent, connections.
8. CONCLUSION
In this paper, we demonstrated the shortcomings of the MPTCP scheduling algorithm and the inadequacy of its reliance on SRTT. We proposed the QueueAware scheduler, which exploits device driver queue occupancy together with SRTT to obtain significantly better performance than the default MPTCP scheduler. We believe that the QueueAware scheduler highlights the need for a more holistic approach to multipath scheduling than is currently taken by MPTCP.
9. REFERENCES

[1] Cisco Visual Networking Index: Global mobile data traffic forecast update, 2016 to 2021. CISCO white paper.
[2] S. H. Baidya and R. Prakash. Improving the performance of multipath TCP over heterogeneous paths using slow path adaptation. Pages 3222-3227, June 2014.
[3] O. Bonaventure, M. Handley, and C. Raiciu. An overview of Multipath TCP. ;login:, 37(5):17, 2012.
[4] Y. Cao, Q. Liu, G. Luo, and M. Huang. Receiver-driven multipath data scheduling strategy for in-order arriving in SCTP-based heterogeneous wireless networks. Pages 1835-1839, Aug 2015.
[5] Y.-C. Chen, Y.-s. Lim, R. J. Gibbens, E. M. Nahum, R. Khalili, and D. Towsley. A measurement-based study of Multipath TCP performance over wireless networks. In Proceedings of the 2013 Conference on Internet Measurement Conference (IMC '13), pages 455-468, New York, NY, USA, 2013. ACM.
[6] Q. D. Coninck, M. Baerts, B. Hesmans, and O. Bonaventure. Observing real smartphone applications over Multipath TCP. IEEE Communications Magazine, 54(3):88-93, March 2016.
[7] X. Corbillon, R. Aparicio-Pardo, N. Kuhn, G. Texier, and G. Simon. Cross-layer scheduler for video streaming over MPTCP. In Proceedings of the 7th International Conference on Multimedia Systems (MMSys '16), pages 7:1-7:12, New York, NY, USA, 2016. ACM.
[8] M. Coudron and S. Secci. An implementation of Multipath TCP in ns3. Computer Networks, 116:1-11, 2017.
[9] S. Ferlin, Ö. Alay, O. Mehani, and R. Boreli. BLEST: Blocking estimation-based MPTCP scheduler for heterogeneous networks. Pages 431-439, May 2016.
[10] N. Fernando, S. W. Loke, and W. Rahayu. Mobile cloud computing: A survey. Future Generation Computer Systems, 29(1):84-106, 2013.
[11] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, 1993.
[12] J. Hwang and J. Yoo. Packet scheduling for Multipath TCP. Pages 177-179, July 2015.
[13] Apple Inc. Use Multipath TCP to create backup connections for iOS. https://support.apple.com/en-us/HT201373, 2017.
[14] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, pages 1-12. ACM, 2009.
[15] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken. The nature of data center traffic: Measurements & analysis. In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement (IMC '09), pages 202-208, New York, NY, USA, 2009. ACM.
[16] N. Kuhn, E. Lochin, A. Mifdaoui, G. Sarwar, O. Mehani, and R. Boreli. DAPS: Intelligent delay-aware packet scheduling for multipath transport. Pages 1222-1227, June 2014.
[17] D. Ni, K. Xue, P. Hong, and S. Shen. Fine-grained forward prediction based dynamic packet scheduling mechanism for Multipath TCP in lossy networks. Pages 1-7, Aug 2014.
[18] D. Ni, K. Xue, P. Hong, H. Zhang, and H. Lu. OCPS: Offset compensation based packet scheduling mechanism for Multipath TCP. Pages 6187-6192, June 2015.
[19] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure. Experimental evaluation of Multipath TCP schedulers. In Proceedings of the 2014 ACM SIGCOMM Workshop on Capacity Sharing Workshop (CSWS '14), pages 27-32, New York, NY, USA, 2014. ACM.
[20] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure. Experimental evaluation of Multipath TCP schedulers. In Proceedings of the 2014 ACM SIGCOMM Workshop on Capacity Sharing Workshop (CSWS '14), pages 27-32, New York, NY, USA, 2014. ACM.
[21] C. Paasch and B. Sebastian. Multipath TCP in the Linux kernel, 2017.
[22] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP throughput: A simple model and its empirical validation. SIGCOMM Comput. Commun. Rev., 28(4):303-314, Oct. 1998.
[23] J. Padhye, J. Kurose, D. Towsley, and R. Koodli. A model based TCP-friendly rate control protocol. In Proceedings of NOSSDAV, 1999.
[24] C. Raiciu, S. Barre, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley. Improving datacenter performance and robustness with multipath TCP. In Proceedings of the ACM SIGCOMM 2011 Conference (SIGCOMM '11), pages 266-277, New York, NY, USA, 2011. ACM.
[25] C. Raiciu, D. Niculescu, M. Bagnulo, and M. J. Handley. Opportunistic mobility with multipath TCP. In Proceedings of the Sixth International Workshop on MobiArch (MobiArch '11), pages 7-12, New York, NY, USA, 2011. ACM.
[26] C. Raiciu, C. Paasch, S. Barre, A. Ford, M. Honda, F. Duchene, O. Bonaventure, and M. Handley. How hard can it be? Designing and implementing a deployable multipath TCP. In Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, pages 29-29. USENIX Association, 2012.
[27] Z. Rosberg and P. Kermani. Customer routing to different servers with complete information. Advances in Applied Probability, 21(4):861-882, 1989.
[28] Y.-s. Lim, Y.-C. Chen, E. M. Nahum, D. Towsley, and K.-W. Lee. Cross-layer path management in multi-path transport protocol for mobile devices. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pages 1815-1823, April 2014.
[29] R. R. Weber. On the optimal assignment of customers to parallel servers. Journal of Applied Probability, 15(2), 1978.
[30] W. Whitt. Deciding which queue to join: Some counterexamples. Oper. Res., 34(1):55-62, Jan. 1986.
[31] W. Winston. Optimality of the shortest line discipline. Journal of Applied Probability, 14(1):181-189, 1977.
[32] MPTCP working group. Use cases and operational experience with Multipath TCP. https://tools.ietf.org/html/draft-ietf-mptcp-experience-07, 2016.
[33] F. Yang, Q. Wang, and P. D. Amer. Out-of-order transmission for in-order arrival scheduling for Multipath TCP. In 2014 28th International Conference on Advanced Information Networking and Applications Workshops.