Online Caching and Coding at the WiFi Edge: Gains and Tradeoffs
Lalhruaizela Chhangte, Emanuele Viterbo, D Manjunath, Nikhil Karamchandani
Lalhruaizela Chhangte
IITB-Monash Research Academy, Mumbai, India
[email protected]

Emanuele Viterbo
Monash University, Clayton, VIC, Australia
[email protected]

D Manjunath
IIT Bombay, Mumbai, India
[email protected]

Nikhil Karamchandani
IIT Bombay, Mumbai, India
[email protected]
Abstract—Video content delivery at the wireless edge continues to be challenged by insufficient bandwidth and highly dynamic user behavior, which affect both effective throughput and latency. Caching at the network edge and coded transmissions have been found to improve user performance of video content delivery. The caches at the wireless edge stations (BSs, APs) and at the users' end devices can be populated by pre-caching content or by using online caching policies. In this paper, we propose a system where content is cached at the users of a WiFi network via online caching policies, and coded delivery is employed by the WiFi AP to deliver the requested content to the user population. The content of the cache at each user serves as side information for index coding. We also propose the LFU-Index cache replacement policy at the user, which demonstrably improves index coding opportunities at the WiFi AP for the proposed system. Through an extensive simulation study, we determine the gains achieved by caching and by index coding. Next, we analyze the tradeoffs between them in terms of data transmitted, latency, and throughput for different content request behaviors of the users. We also show that the proposed cache replacement policy performs better than traditional cache replacement policies like LRU and LFU.
I. INTRODUCTION
Video constitutes more than 70% of the total IP traffic today, and is expected to reach 82% by 2020 [1]. Video content delivery at the wireless edge continues to be challenged by insufficient bandwidth and highly dynamic user behavior. This challenge will be further exacerbated because mobile and wireless devices are expected to account for two-thirds of the total IP traffic by 2020 [1]. Two mechanisms that can be used to improve user performance of video delivery are: (i) caching of video content at the wireless edge nodes (base stations (BSs), access points (APs), and users), and (ii) coded delivery techniques that multicast, or broadcast, a single stream to serve multiple users simultaneously.

Caching of content at the network edge has been found useful for improving content delivery by several recent investigations [2]-[6]. However, most of these studies considered pre-caching of content based on predicted user demand profiles [2], [3], [7], [8]. On the other hand, only a few studies considered online caching, where the contents to be cached are decided by cache replacement policies after they are requested [7], [9]. Online caching is expected to become increasingly important because popularity profiles of content are expected to change on faster timescales.

A second mechanism that can help improve user performance of video content delivery is coded delivery. Coded delivery techniques [10]-[12] are information-theoretic approaches that use side information cached at the client side so that the server can broadcast index-coded information simultaneously to multiple clients. This broadcast is received by multiple clients, who use the side information in their local caches to decode the information that they need. The side information, i.e., the contents cached at the users, can be pre-cached [10], [11] or updated via online methods [9], [13].
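As a toy illustration of such index-coded delivery (a hypothetical two-client setup; file names and contents are invented for the example), a server can XOR two equal-length files and let each client recover the file it wants by XORing with the file it already holds as side information:

```python
# Toy index-coding sketch: two clients, each wanting the file the
# other already has cached. One XOR-coded broadcast serves both.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical file contents (equal length for simplicity).
f1 = b"AAAA"  # client 1 wants f1, has f2 cached
f2 = b"BBBB"  # client 2 wants f2, has f1 cached

coded = xor_bytes(f1, f2)          # one broadcast instead of two unicasts

# Each client decodes with its cached side information.
decoded_at_client1 = xor_bytes(coded, f2)   # recovers f1
decoded_at_client2 = xor_bytes(coded, f1)   # recovers f2

assert decoded_at_client1 == f1
assert decoded_at_client2 == f2
```

The same principle extends to the multi-client instances discussed in the next section, where the server chooses which requests to combine.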
Also, the works on online coded delivery techniques considered long-term (weeks, months) gains when evaluating the performance of coding. However, user behavior in a wireless network is highly dynamic, and the clients connected to a single wireless edge station (BS, AP) can change on faster timescales.

Our interest in this work is to assess how frequently index coding opportunities arise in a WiFi network over short timescales, and to estimate the advantages of caching and index coding on throughput and latency as a function of the content request behavior of the users. The following are the specific contributions of this paper.
• From our experience of building the Wi-Cache system [14], [15], we propose a system model to exploit possible gains of online caching and index coding.
• For this system, we propose the LFU-Index cache replacement policy at the user and a coding heuristic at the wireless edge station, e.g., a WiFi AP.
• From an extensive simulation study, we determine gains and tradeoffs from caching and index coding using metrics like data transmitted, latency, and throughput.
The rest of the paper is organized as follows. In the next section we present an index coding problem instance. Section III describes a general video content distribution system. Section IV contains the detailed proposed system model. Section V describes the simulation settings, while the key results are provided in Section VI.

II. INDEX CODING
Fig. 1 shows an index coding problem instance with five wireless clients and a server. (Throughout the paper, we use client and user interchangeably to denote a user terminal.) The server has the set of files {F1, . . . , F5}, and each client i wants file Fi from the server, denoted by the set Wants. Further, each client has cached a copy of the files previously received from the server, denoted by the set Has. When the sets Wants and Has of each client are revealed to the server, the server constructs a set of coded files and broadcasts the coded stream, which is received by all the clients. Each client then decodes the coded files to retrieve the file that it wants. Coding and decoding are by simple XOR operations. In the example of Fig. 1, after receiving the sets Wants and Has of each of the clients, the server broadcasts F1 ⊕ F2, F3 ⊕ F4, and F5. Upon receiving the broadcast, clients 1 and 2 can retrieve F1 and F2 from F1 ⊕ F2 by performing XOR with F2 and F1, respectively. Similarly, clients 3 and 4 can retrieve F3 and F4 from F3 ⊕ F4, respectively, and client 5 can receive F5 as it is. Therefore, instead of transmitting five different files, the server transmits only 3 files, thus reducing the number of bits transmitted over the broadcast channel.

Fig. 1. An index coding problem instance

Fig. 2. Video content distribution system

III. VIDEO CONTENT DISTRIBUTION SYSTEM
We consider a system consisting of a wireless edge station, which could be a WiFi access point (AP) or a cellular base station (BS), connected to a video streaming server through a wired link, as shown in Fig. 2. The video streaming server stores N video files denoted by F := {f1, . . . , fN}, which are split into smaller video segments of fixed playback duration P. K clients denoted by C := {c1, . . . , cK} are connected to the wireless edge station through a wireless link of bandwidth Bw. Each client is equipped with a cache of size M bits to cache the video segments requested from the server.

IV. SYSTEM MODEL
We propose the following system model to study the performance of caching and index coding, as well as the tradeoffs between them. This model is based on the Wi-Cache wireless edge caching system that we have developed. The system architecture and the implementation details are available in [14], [15].
A. File popularity model
Let P^{(i)} := {P^{(i)}_{f_1}, P^{(i)}_{f_2}, . . . , P^{(i)}_{f_N}} be the file popularity distribution over the video files f_1, f_2, . . . , f_N for a client c_i. In the proposed model, we initially assign the same file popularity distribution to all the clients, which is then updated by each client based on its own requests.

Initial distribution: We consider the MZipf distribution to model the initial file popularity distribution for each client, an assumption supported by a recent measurement study on large-scale content distribution systems [16]. Based on MZipf, the probability that a user requests video file f_i, i.e., the request probability, is

\[ P_{f_i} = \frac{(i + q)^{-\gamma}}{\sum_{j=1}^{N} (j + q)^{-\gamma}}, \quad i = 1, 2, \ldots, N, \]

where N is the total number of video files at the server, γ is the Zipf factor, and q is the plateau factor.
Update: A client c_i updates its own distribution every time it requests a file, based on a rewatch factor, denoted by α, with α ∈ [0, 1]. If a client c_i requests a video file f_j, the new probabilities are calculated as follows:

\[ P^{(i)}_{f_k} \leftarrow P^{(i)}_{f_k} + \frac{P^{(i)}_{f_k}}{1 - P^{(i)}_{f_j}} \left( P^{(i)}_{f_j} - \alpha P^{(i)}_{f_j} \right), \quad \forall f_k \in F \setminus \{ f_j \}, \]
\[ P^{(i)}_{f_j} \leftarrow \alpha P^{(i)}_{f_j}. \]

That is, the request probability of the video file f_j is scaled by α, and the reduction in its probability, (P^{(i)}_{f_j} − α P^{(i)}_{f_j}), is distributed across the other files proportionally to their current request probabilities.

Special cases:
• α = 0: the probability of a requested file is updated to 0, and therefore the clients do not rewatch video files.
• α = 1: there is no change in the request probabilities, and therefore the clients maintain the same initial distribution independent of their requests.
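A minimal sketch of this popularity model, assuming the proportional redistribution described above (function and parameter names are ours, not from the paper's simulator):

```python
# Sketch of the per-client file popularity model: MZipf initial
# distribution plus the rewatch-factor update. Names are illustrative.

def mzipf(n_files, gamma, q):
    """Initial MZipf request probabilities P_{f_i} proportional to (i+q)^-gamma."""
    weights = [(i + q) ** (-gamma) for i in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def rewatch_update(p, j, alpha):
    """Scale P_{f_j} by alpha and redistribute the removed mass over the
    other files in proportion to their current probabilities.
    Assumes p[j] < 1 so the denominator (1 - p[j]) is nonzero."""
    removed = p[j] - alpha * p[j]          # (1 - alpha) * P_{f_j}
    scale = removed / (1.0 - p[j])         # share per unit of remaining prob.
    new_p = [pk + scale * pk for pk in p]  # index j is overwritten below
    new_p[j] = alpha * p[j]
    return new_p

p = mzipf(100, gamma=2.0, q=10)
p = rewatch_update(p, j=0, alpha=0.5)
assert abs(sum(p) - 1.0) < 1e-9            # still a probability distribution
```

The final assertion checks the property used implicitly in the text: the update keeps the probabilities summing to one.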
B. Video streaming and service model

Streaming a video file f_i that is split into S segments involves requesting the video segments f_i^1, f_i^2, . . . , f_i^S in sequential order, one segment at a time. We consider that when a client initially joins the network, or finishes streaming a video file, it waits for a random amount of time, T_w, before requesting a video file. We model T_w as an exponential random variable with mean 1/λ seconds.

We consider a system where the clients send their cache content information, i.e., the set of video segments present in their caches, to the wireless edge station every time they establish a new connection. Also, we require that the wireless edge station keep track of the cache content information of a client as long as the client is connected to it. Once a client is connected to the wireless edge station, it selects a file based on the file popularity model described in Sec. IV-A and streams the video file. In order to handle the video segment requests that arrive asynchronously at the wireless edge station, we consider a transmission buffer B_t and a request queue Q_r, as illustrated in Fig. 3.
Fig. 3. Service model
When a segment request r_i from a client c_i arrives, it is placed at the end of Q_r. The video segment currently being transmitted is in B_t. As soon as the current transmission is finished, the video segment request at the front of Q_r is placed in B_t, and the transmission of that video segment begins. In the system model, we consider that the transmission is a multicast, and the clients process only the multicast segments which are designated for them. The multicast segments can be either index-coded or non-index-coded.

In order to serve the clients' requests, the wireless edge station fetches the content of the requested video segments from the video streaming server. Meanwhile, it also looks for index coding opportunities among the received segment requests.
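The queueing behavior just described can be sketched with a simple FIFO (the class and method names are our own illustration, not the simulator's API):

```python
# Sketch of the service model: segment requests join a FIFO request
# queue Q_r; when the transmission buffer B_t frees up, the head of
# the queue is moved into B_t and multicast.

from collections import deque

class EdgeStation:
    def __init__(self):
        self.request_queue = deque()   # Q_r
        self.tx_buffer = None          # B_t: segment currently being sent

    def on_request(self, segment):
        """A segment request arrives and joins the end of Q_r."""
        self.request_queue.append(segment)

    def on_tx_done(self):
        """Current transmission finished: load the next request, if any."""
        self.tx_buffer = self.request_queue.popleft() if self.request_queue else None
        return self.tx_buffer

ap = EdgeStation()
ap.on_request("s1"); ap.on_request("s2")
assert ap.on_tx_done() == "s1"   # head of Q_r moves into B_t
assert ap.on_tx_done() == "s2"
```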
Index coding model: When index coding is enabled at the wireless edge station, for every segment request that arrives, the wireless edge station checks whether it can be index-coded with any of the segment requests currently in Q_r. For a segment request r_i from client c_i, we define H(r_i) and W(r_i), where H(r_i) is the set of video segments currently cached by c_i, and W(r_i) is the set of video segments requested by c_i. Requests r_i from c_i and r_j from c_j can be index-coded if W(r_i) ⊂ H(r_j) and W(r_j) ⊂ H(r_i), and we represent the index-coded segment request as r_{i,j}. For the index-coded segment request r_{i,j}, which consists of requests from c_i and c_j, H(r_{i,j}) = H(r_i) ∩ H(r_j) and W(r_{i,j}) = W(r_i) ∪ W(r_j). Given {W(r_i), H(r_i)}_{i=1,...,K}, finding an optimal strategy that maximizes the index coding is NP-hard [17]; fortunately, many low-complexity heuristic algorithms are known to perform well [17].

We also propose a heuristic to maximize the index coding performed at the wireless edge station; Algorithm 1 uses this heuristic. In order to describe the heuristic, we first define the degree of freedom (DOF) [12] and the degree of effort (DOE). For a request r_j from c_j currently residing in Q_r, we call |H(r_j)| the DOF of r_j and |W(r_j)| the DOE of r_j. The DOF represents the potential number of other segment requests which can be served by the cache content of c_j through index coding. The DOE represents the number of segments that have to be present in the cache of another client to form an index-coded segment with r_j. The heuristic employed by Algorithm 1 minimizes the loss of DOF if there are multiple index coding opportunities for an incoming request, and it minimizes the propagation of DOE if the loss of DOF is the same.
Algorithm 1: Index coding
Input: r_i: segment request just arrived; Q_r: request queue
sel_req ← none; sel_DOF ← 0; sel_DOE ← ∞
while end of Q_r is not reached do
    pick the next segment request r_j from the front of Q_r
    if (W(r_i) ⊂ H(r_j)) and (W(r_j) ⊂ H(r_i)) then
        if |H(r_{i,j})| > sel_DOF then
            sel_req ← r_j; sel_DOF ← |H(r_{i,j})|; sel_DOE ← |W(r_{i,j})|
        else if |H(r_{i,j})| == sel_DOF and |W(r_{i,j})| < sel_DOE then
            sel_req ← r_j; sel_DOF ← |H(r_{i,j})|; sel_DOE ← |W(r_{i,j})|
        end
    end
end
if sel_DOF > 0 then
    r_{sel_req,i} ← code sel_req and r_i; update the DOF and DOE of r_{sel_req,i}
else
    put r_i at the end of Q_r
end
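A compact Python sketch of this pairing heuristic (the set-based request objects are our own simplification of the simulator's internals):

```python
# Sketch of the Algorithm 1 heuristic: among all requests in the queue
# that can be index-coded with the incoming request, pick the partner
# that maximizes the DOF of the coded request, breaking ties by
# minimizing its DOE. Data structures are illustrative simplifications.

from dataclasses import dataclass, field

@dataclass
class Request:
    client: str
    wants: set = field(default_factory=set)  # W(r): segments requested
    has: set = field(default_factory=set)    # H(r): segments cached

def codable(ri: Request, rj: Request) -> bool:
    """r_i and r_j can be index-coded iff each Wants set is covered
    by the other client's Has set."""
    return ri.wants <= rj.has and rj.wants <= ri.has

def select_partner(ri: Request, queue: list):
    """Return the best coding partner in `queue` for r_i, or None."""
    best, best_dof, best_doe = None, 0, float("inf")
    for rj in queue:
        if not codable(ri, rj):
            continue
        dof = len(ri.has & rj.has)      # |H(r_{i,j})| = |H(r_i) ∩ H(r_j)|
        doe = len(ri.wants | rj.wants)  # |W(r_{i,j})| = |W(r_i) ∪ W(r_j)|
        if dof > best_dof or (best is not None and dof == best_dof and doe < best_doe):
            best, best_dof, best_doe = rj, dof, doe
    return best

r1 = Request("c1", wants={"s1"}, has={"s2", "s3"})
r2 = Request("c2", wants={"s2"}, has={"s1", "s3"})
r3 = Request("c3", wants={"s2"}, has={"s1"})
assert select_partner(r1, [r3, r2]) is r2  # r2 yields DOF 1; r3 yields DOF 0
```

As in the algorithm, a codable pair with zero DOF is not selected, so the incoming request would re-join the queue instead.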
C. Caching model and cache replacement policies

We consider an online caching model at the clients, i.e., a client caches a requested video segment if there is enough space in its cache. If there is not enough space, it evicts one or more segments from its cache to accommodate the new segment, based on a certain cache replacement policy. Moreover, when segment requests are found in the cache, i.e., there is a cache hit, the requests are served from the cache directly.

Traditional cache replacement policies such as least recently used (LRU) [18] and least frequently used (LFU) [19] use simple heuristics to improve the cache hit count, i.e., the number of segment requests which are found in the cache. We consider the following model to describe the cache replacement policies used in the system model: LFU, LRU, Belady, and LFU-Index. Let S_i := {s_1, s_2, . . .} be the set of all the segments present in the cache of a client c_i, and for each segment s_j present in the cache, we define a 6-tuple

< s_j, β^l_{s_j}, Γ^l_{s_j}, β^g_{s_j}, Γ^g_{s_j}, τ_{s_j} >,

where s_j is the segment name, β^l_{s_j} is the last time s_j was requested by c_i, Γ^l_{s_j} is the request count of s_j by c_i, β^g_{s_j} is the last time s_j was requested by any client in C, Γ^g_{s_j} is the sum of the request counts of s_j across all the clients in C, and τ_{s_j} is the number of clients which cache s_j.

1) LRU: LRU evicts the segment with the least β^g_{s_j} to accommodate the new segment.
2) LFU: LFU evicts the segment with the least Γ^g_{s_j} to accommodate the new segment.
3) Belady: Belady [20] is the optimal cache replacement policy in terms of cache hit count. It evicts the segment in the cache which will be requested furthest in the future. However, it requires knowledge of the future and is thus not achievable in practice.

4) LFU-Index: We also propose LFU-Index, a cache replacement policy which extends LFU by improving the index coding opportunities in addition to the cache hit count. In order for index coding to be possible between two clients, the request of each client has to be present in the cache of the other client. Therefore, index coding opportunities arise when the cache contents of the clients in the network are different and the clients request different contents. The heuristic employed by LFU-Index, shown in Algorithm 2, improves index coding opportunities by maximizing the difference in the cache contents across the clients. From the set of segments with the minimum request count, i.e., Γ^g = min Γ^g, a client evicts the segment which is cached the most by the other clients, i.e., τ = max τ. If there are multiple segments with τ = max τ, the segment with the least β^l is evicted.
Algorithm 2: LFU-Index
Input: s: new segment just received by client c_i; S_i: cache of c_i
min_Γ^g ← minimum Γ^g_{s_j} over S_i; max_τ ← 0; min_β^l ← ∞
p_s ← placeholder for the segment to be evicted
for each segment < s_j, β^l_{s_j}, Γ^l_{s_j}, β^g_{s_j}, Γ^g_{s_j}, τ_{s_j} > in S_i do
    if Γ^g_{s_j} == min_Γ^g then
        if τ_{s_j} > max_τ then
            p_s ← s_j; max_τ ← τ_{s_j}
        else if τ_{s_j} == max_τ and β^l_{s_j} < min_β^l then
            p_s ← s_j; min_β^l ← β^l_{s_j}
        end
    end
end
Evict p_s from S_i; add s to S_i
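A sketch of this eviction rule in Python (the Segment fields mirror the 6-tuple above; the container and names are our own simplification):

```python
# Sketch of LFU-Index eviction: among the segments with the minimum
# global request count, evict the one cached by the most other clients,
# breaking ties by the least recent local request time.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    beta_l: float   # last local request time
    gamma_l: int    # local request count
    beta_g: float   # last global request time
    gamma_g: int    # global request count
    tau: int        # number of clients caching this segment

def lfu_index_victim(cache: list) -> Segment:
    """Pick the segment to evict under the LFU-Index policy."""
    min_gamma_g = min(seg.gamma_g for seg in cache)
    candidates = [seg for seg in cache if seg.gamma_g == min_gamma_g]
    # Most widely cached first (max tau), then least recently used locally.
    return max(candidates, key=lambda seg: (seg.tau, -seg.beta_l))

cache = [
    Segment("s1", beta_l=10.0, gamma_l=2, beta_g=12.0, gamma_g=5, tau=1),
    Segment("s2", beta_l=3.0,  gamma_l=1, beta_g=11.0, gamma_g=2, tau=4),
    Segment("s3", beta_l=7.0,  gamma_l=1, beta_g=9.0,  gamma_g=2, tau=4),
]
assert lfu_index_victim(cache).name == "s2"  # min gamma_g -> {s2, s3}; tie -> least beta_l
```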
V. SIMULATION SETTINGS
We create a discrete-event simulator, written in Python, to assess the performance of caching and coding at the wireless edge station. We consider a WiFi network consisting of an access point (AP), 10 wireless clients, and a video content server storing 100 video files. An 802.11n AP is considered, and the data link multicast rate is set at 24 Mbps. Also, we consider that there is no delay in fetching a video segment from the cache. The video files used in the simulation are short video clips of length 2-5 minutes. The segments are encoded to a bitrate of 5 Mbps, 1280x720 resolution, and 4 seconds segment playback duration using the MP4Box [21] DASH encoder. The segment size distribution is shown in Fig. 4, where the histogram shows the fraction of different segment sizes and the red line shows the trend. We use the MZipf file popularity distribution described in Sec. IV-A, with parameters q = 10 and γ = 2, as shown in Fig. 5, for the initial file popularity distribution. The 20 most popular files constitute approximately 80% of the file request probability. We consider a setting where the clients are actively requesting video files for 3 hours, with mean waiting time 1/λ = 5 seconds.

Fig. 4. Segment size distribution

Fig. 5. Initial file popularity distribution

In order to perform the simulations, we first generate request profiles for three values of the cache size M and for α = 0, 0.25, 0.5, 0.75, 1.0, where the values of M are represented as fractions of the total size of the files present at the video streaming server. A request profile is a sequence of user and file request pairs, which indicates the order in which the clients request the video files. Using the generated request profiles, we run simulations of the proposed system described in Sec. IV for the cases when index coding is enabled and when it is disabled.

VI. PERFORMANCE METRICS AND RESULTS
We consider the following metrics to determine the performance of caching and index coding, as well as the tradeoffs between them.

We define the gain due to caching only, G_c, as the ratio of the data transmitted in bits when caching is not employed at the client side to the data transmitted in bits when caching is employed at the client side, i.e.,

\[ G_c = \frac{\text{TX bits with no cache}}{\text{TX bits with cache}}. \]

We define the gain due to index coding only, G_i, as the ratio of the data transmitted in bits when only caching is employed at the client side to the data transmitted in bits when both caching and index coding are employed, i.e.,

\[ G_i = \frac{\text{TX bits with cache and no index coding}}{\text{TX bits with cache and index coding}}. \]

We define the gain due to caching and index coding, G_{c,i}, as the ratio of the data transmitted in bits when neither caching nor index coding is employed to the data transmitted in bits when both caching and index coding are employed at the client side, i.e.,

\[ G_{c,i} = \frac{\text{TX bits with no cache and no index coding}}{\text{TX bits with cache and index coding}} = G_c \times G_i. \]

We define the average perceived latency per MB, L_s, as the average time elapsed between when a segment of size 1 megabyte (MB) is requested by a client and when the segment is available for playback at the client. The perceived latency per MB is calculated over all the segments, i.e., segments obtained from the network and from the cache.

We define the average perceived throughput, T_s, as the average number of bits received per second from the wireless edge station for every segment request made by a client. The perceived throughput is calculated only for the segments which are obtained from the network.

Fig. 6. Gain as a function of rewatch factor (α) for different values of cache size (M)

Fig. 7. Average perceived latency per MB as a function of rewatch factor (α) for different values of cache size (M)

Figs. 6, 7, and 8 show the performance of the proposed system in terms of gain, latency, and throughput for different values of the rewatch factor (α) and cache size (M). From these figures, we can see that as α increases, there is an improvement in gain, latency, and throughput for all the cache replacement policies described in Sec. IV-C.
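Under these definitions, the gains reduce to ratios of transmitted-bit counters gathered from the four simulator configurations (the counter names below are illustrative):

```python
# Sketch of the gain metrics: ratios of bits sent over the wireless
# link under the different configurations. Counter values are illustrative.

def gains(tx_plain, tx_cache, tx_cache_coding):
    """Return (G_c, G_i, G_{c,i}) from transmitted-bit counters:
    tx_plain: no cache, no coding; tx_cache: cache only;
    tx_cache_coding: cache + index coding."""
    g_c = tx_plain / tx_cache
    g_i = tx_cache / tx_cache_coding
    return g_c, g_i, g_c * g_i          # G_{c,i} = G_c * G_i

g_c, g_i, g_ci = gains(tx_plain=10e9, tx_cache=5e9, tx_cache_coding=4e9)
assert (g_c, g_i, g_ci) == (2.0, 1.25, 2.5)
```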
The improvement in performance is due to the rewatching of videos, which increases as α increases.

Now consider the case α = 0, i.e., when the clients do not rewatch videos. The improvement in performance is then due only to index coding, and it increases as the cache size (M) at the clients increases. This is due to the increase in index coding opportunities as M increases.

Moreover, for any α, the performance due to both caching and index coding improves as M increases, due to the increase in index coding opportunities and cache hits as the number of video segments stored in the cache grows. However, the improvement in gains due to caching is slightly higher than the improvement in index coding gains as M increases.

Further, from the figures, we can observe that the proposed LFU-Index cache replacement policy performs better than the traditional policies, LRU and LFU, and is closer to Belady, which is the optimal cache replacement policy for a given request profile. The gain in performance of LFU-Index is due to the improvement in index coding opportunities in addition to the cache hit count. In general, index coding opportunities arise when the cache contents of the clients are different and each client requests contents present in the other clients' caches. The proposed LFU-Index policy maximizes the difference in the cache contents across the clients, which significantly improves the index coding opportunities compared to the other policies.

Fig. 8. Average perceived network throughput as a function of rewatch factor (α) for different values of cache size (M)
Finally, we can see that for very large values of M, the performance of LFU, LRU, and LFU-Index approaches that of the optimal Belady policy, as most of the requests are served by the caches.

VII. CONCLUSIONS
In this paper, we proposed a system model to evaluate the gains of caching and index coding, as well as the tradeoffs between them. We considered the case where an online caching policy is employed at the user to generate the side information for index coding. We also proposed an index coding heuristic at the wireless edge station and a cache replacement policy at the user for the proposed system model. Through an extensive simulation study, we presented the effect of the rewatch factor (α) and the cache size (M) on the performance of the system for the cases where index coding is enabled and disabled. We also showed that the LFU-Index cache replacement policy that we propose performs better than the traditional LRU and LFU policies, because it improves both the index coding opportunities and the cache hits. The LFU-Index policy is being built into the Wi-Cache system that is described in [14], [15].

REFERENCES
[1] Cisco, "Cisco visual networking index: Global mobile data traffic forecast update, 2015-2020," White Paper, 2015. [Online]. Available: http://goo.gl/tZ6QMk
[2] N. Golrezaei, A. F. Molisch, A. G. Dimakis, and G. Caire, "Femtocaching and device-to-device collaboration: A new architecture for wireless video distribution,"
IEEE Commun. Mag., vol. 51, no. 4, pp. 142-149, April 2013.
[3] E. Bastug, M. Bennis, and M. Debbah, "Living on the edge: The role of proactive caching in 5G wireless networks," IEEE Commun. Mag., vol. 52, no. 8, pp. 82-89, Aug. 2014.
[4] M. Ji, G. Caire, and A. F. Molisch, "Wireless device-to-device caching networks: Basic principles and system performance," IEEE J. Sel. Areas Commun., vol. 34, no. 1, pp. 176-189, Jan. 2016.
[5] D. Liu and C. Yang, "Energy efficiency of downlink networks with caching at base stations," IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 907-922, April 2016.
[6] K. Wang, Z. Chen, and H. Liu, "Push-based wireless converged networks for massive multimedia content delivery," IEEE Trans. Wireless Commun., vol. 13, no. 5, pp. 2894-2905, May 2014.
[7] H. Ahlehagh and S. Dey, "Video-aware scheduling and caching in the radio access network," IEEE/ACM Trans. Netw., vol. 22, no. 5, pp. 1444-1462, Oct. 2014.
[8] B. D. Higgins, J. Flinn, T. J. Giuli, B. Noble, C. Peplin, and D. Watson, "Informed mobile prefetching," in Proc. 2012 ACM MobiSys, 2012, pp. 155-168.
[9] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, "Online coded caching," IEEE/ACM Trans. Netw., vol. 24, no. 2, pp. 836-845, April 2016.
[10] M. A. Maddah-Ali and U. Niesen, "Fundamental limits of caching," IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856-2867, May 2014.
[11] M. A. Maddah-Ali and U. Niesen, "Decentralized coded caching attains order-optimal memory-rate tradeoff," IEEE/ACM Trans. Netw., vol. 23, no. 4, pp. 1029-1040, Aug. 2015.
[12] Y. Birk and T. Kol, "Coding on demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2825-2830, June 2006.
[13] Q. Yan, U. Parampalli, X. Tang, and Q. Chen, "Online coded caching with random access," IEEE Commun. Lett., vol. 21, no. 3, pp. 552-555, March 2017.
[14] L. Chhangte, A. Garg, D. Manjunath, and N. Karamchandani, "Wi-Cache: Towards an SDN based distributed content caching system in WLAN," in Proc. 2018 COMSNETS, Jan. 2018, pp. 503-506.
[15] L. Chhangte, D. Manjunath, and N. Karamchandani, "An SDN based content cache at the WiFi edge," in Proc. 2018 ACM MobiCom, Oct. 2018, pp. 672-674.
[16] M. Lee, M. Ji, A. F. Molisch, and N. Sastry, "Throughput-outage analysis and evaluation of cache-aided D2D networks with measured popularity distributions," IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5316-5332, Nov. 2019.
[17] M. A. R. Chaudhry and A. Sprintson, "Efficient algorithms for index coding," in Proc. IEEE INFOCOM Workshops 2008, April 2008, pp. 1-4.
[18] S. Angelopoulos, R. Dorrigiv, and A. López-Ortiz, "On the separation and equivalence of paging strategies," in Proc. 2007 ACM-SIAM, 2007, pp. 229-237.
[19] D. Lee et al., "LRFU: A spectrum of policies that subsumes the least recently used and least frequently used policies," IEEE Trans. Comput., vol. 50, no. 12, pp. 1352-1361, Dec. 2001.
[20] L. A. Belady, "A study of replacement algorithms for a virtual-storage computer," IBM Systems Journal, vol. 5, no. 2, pp. 78-101, 1966.