Non-Symmetric Coded Caching for Location-Dependent Content Delivery
Hamidreza Bakhshzad Mahmoodi, MohammadJavad Salehi, Antti Tölli
Center for Wireless Communications, University of Oulu, 90570 Finland
Email: {first name.last name}@oulu.fi

Abstract—Immersive viewing is emerging as the next interface evolution for human-computer interaction. A truly wireless immersive application necessitates immense data delivery with ultra-low latency, raising stringent requirements for next-generation wireless networks. A potential solution for addressing these requirements is the efficient usage of in-device storage and computation capabilities. This paper proposes a novel location-based coded cache placement and delivery scheme, which leverages nested code modulation (NCM) to enable multi-rate multicast transmission. To provide a uniform quality of experience in different network locations, we formulate a linear programming cache allocation problem. Next, based on the users' spatial realizations, we adopt an NCM-based coded delivery algorithm to efficiently serve a distinct group of users during each transmission. Numerical results demonstrate that the proposed location-based delivery method significantly increases transmission efficiency compared to the state of the art.
Index Terms—Coded Caching; Location-Dependent Caching; Immersive Viewing
I. INTRODUCTION
The current dominant interface for human-computer interaction is through mobile flat-screen devices such as smartphones and tablets. The next interface evolution is expected to bring forward wireless immersive viewing experiences facilitated by more capable wearable gadgets submerging users into the three-dimensional (3D) digital world. However, such an evolution requires powerful and agile external radio connections. It imposes extremely stringent key performance indicators (KPIs) that are simply beyond what is possible with the current networking standards [1]. This necessitates new techniques that leverage recent advances in communication, storage, computing, big data analysis and machine learning [2].

In this regard, using cheap on-device storage for improving bandwidth efficiency is considered a promising technique [3]. In [4], it is shown that utilizing the caching and computing capabilities of mobile VR devices effectively alleviates the traffic burden over the wireless network. In [5], a new caching technique known as Coded Caching (CC) is introduced. In CC, well-defined fragments of all the files in the library are stored in the cache memories, creating a cache-aided multicasting gain during the delivery phase and resulting in a global caching gain. The original CC scheme in [5] is extended to multi-server networks in [6], and later to wireless multi-antenna systems in [7], [8]. A device-to-device (D2D) extension of multi-antenna coded caching is also investigated in [9].
This work was supported by the Academy of Finland under grants no. 319059 (Coded Collaborative Caching for Wireless Energy Efficiency) and 318927 (6Genesis Flagship).
Meanwhile, various practical limitations of coded caching have been addressed by the research community. Most notably, the large subpacketization requirement, defined as the number of smaller parts each file should be split into, is addressed in [10], [11], while the effect of subpacketization on the low-SNR rate is investigated in [12].

A less-studied problem of coded caching schemes, affecting content delivery applications in general and immersive viewing applications in particular, is the near-far issue. Due to the underlying multicasting nature of coded caching schemes, the achievable rate of any multicast message is limited to the rate of the user with the worst channel conditions. In [13], a congestion control technique is proposed to avoid serving users in adverse fading conditions, while in [14] multiple descriptor codes (MDC) are utilized to serve ill-conditioned users with a lower quality of experience. Unlike [13], [14], which are based on traditional XOR-ing of data elements, in [15] nested code modulation (NCM) is proposed to allow building codewords that serve every user in the multicasting group with a different rate. This multi-rate property is achieved by altering the modulation constellation using side information on the file library and other users' requests. A similar multi-rate transmission scheme can also be found in [16]. However, the aforementioned approaches are not suitable for dynamic real-time applications, where users frequently move inside the network and their achievable rates change accordingly.

This paper introduces a new location-dependent coded caching scheme for efficient content delivery in wireless access networks. We consider a wireless communication scenario in which the users are free to move, and their requested content depends on their current location. Furthermore, the requested content at each location is assumed to be of the same size. As a specific use-case, we assume a multi-user immersive viewing environment where a group of users is submerged into a network-based immersive application that runs on high-end eye-wear. Such a use-case necessitates heavy multimedia traffic and a guaranteed user quality of experience (QoE) throughout the operating environment. In this regard, a location-dependent, uneven memory allocation is carried out based on the attainable data rate at each given location. Moreover, a novel multicast transmission scheme with an underlying NCM structure is devised to deliver the missing user-specific content with different data rates within the same multicast transmission. Due to the optimized location-dependent cache placement, the worst-case delivery time is minimized across all the locations.

II. SYSTEM MODEL
We envision a bounded environment (game hall, operating theatre, etc.) in which a single-antenna server serves K single-antenna users through wireless communication links. The set of users is denoted by K = {1, ..., K}. The users are equipped with finite-size cache memories and are free to move throughout the environment. At each time slot, every user requests data from the server based on the application needs and its location. The requested data content can be divided into static and dynamic parts, where the former can be proactively stored in the user cache memories. This paper focuses on the wireless delivery of the static location-dependent content, partially aided by in-device cache memories.¹ A real-world application of this communication setup is a wireless immersive digital experience environment where the requested data is needed for reconstructing the location-dependent 3D field-of-view (FoV) at each user. The goal is to design a cache-aided communication scheme that minimizes the maximum required delivery time to transmit all the requested data to the served users. In other words, the aim is to provide a uniform QoE, irrespective of the users' location.

Intuitively, a larger share of the total cache memory should be reserved for storing data needed in locations where the communication quality is poor. We split the environment into S regions, such that all points in a given region have almost the same distance from the server (i.e., can be served approximately with the same data rate). In the following, we refer to these regions as states and denote the set of states by S. A graphical example of an application environment with its states is provided in Figure 1. The file required for reconstructing the FoV of state j ∈ S is denoted by W(j). We assume that for every state j ∈ S, the size of W(j) is F bits, and every user is equipped with a cache memory of size MF bits. For the sake of simplicity, we consider a normalized data unit and drop F in subsequent notations. Moreover, we consider the delivery procedure in a specific time slot and ignore the time index (the same procedure is repeated every time slot).

We assume a wideband communication scheme, where the total bandwidth is divided into several small frequency bins. For simplicity, we assume that the attainable expected rate at a particular location roughly depends on the transmitted power and the distance between the location and the server.² Thus, the expected data rate attained in state j ∈ S is expressed as r̄(j) = log(1 + P d^{-n}(j)/N), where P is the transmission power, N is the additive white Gaussian noise power, n is the path-loss exponent, and d(j) is the maximum distance from the server to all the points in state j. Thus, the expected data rate over all the frequency bins is approximated as r̂(j) ≈ B r̄(j), where B is the communication bandwidth. For ease of exposition, we consider the normalized data rate r(j) = r̂(j)/F throughout this paper.

¹ We assume that a portion of the achievable data rate available at each user is dedicated to delivering the dynamic content without cache assistance.

² We need an estimate of the achievable rate at different states to perform the location-dependent cache placement. However, during the delivery phase, the communication can be performed with real achievable rates calculated using the available channel state information (CSI). In this paper, for notational and analysis simplicity, we assume the proposed rate estimation in the placement phase, based on the simple path-loss model and line-of-sight (LOS) communications, is also valid throughout the delivery phase. Relaxing this assumption is left for the extended version of the paper.

Fig. 1: An application environment with K = 3 users, split into S = 8 states. r(j) is the state-specific achievable rate and r(3) > r(2) > r(7). X is the multicast message, and Y_i represents the data part intended for user i. The black bar below each user indicates how much of the requested data is cached, and ∗ denotes the variable-rate NCM operation.

III. LOCATION-DEPENDENT CODED CACHING
Similar to other centralized coded caching schemes, our new location-dependent scheme also works in two distinct phases: 1) cache placement and 2) content delivery.
A. Cache Placement
During the placement phase, the users' cache memories are filled with useful data to minimize the duration of the content delivery phase. However, the content placement is carried out without any prior knowledge of the users' spatial distribution in the delivery phase. As defined in Section II, we assume the application environment is split into S states, and a location-dependent multimedia file of size one normalized data unit is required to reconstruct the static FoV of each state.

Before proceeding with the data placement, we first use a memory allocation process to determine the amount of cache memory dedicated to storing (parts of) W(j) at each user. Since there is no prior knowledge about the users' spatial realizations in the delivery phase, we minimize the maximum delivery time for a single user assuming uniform access probability for all the states. Let us use m(j) to denote the normalized cache size at each user allocated to storing (parts of) W(j). Since the size of W(j) is normalized to one data unit, a user in state j needs to receive 1 − m(j) data units over the wireless link to reconstruct the FoV of state j. As a result, data delivery to state j needs T(j) = (1 − m(j))/r(j) seconds. Hence, the memory allocation for minimizing the maximum delivery time is found by solving the following linear program (LP):

LP:  min_{m(j), γ ≥ 0}  γ
     s.t.  (1 − m(j))/r(j) ≤ γ,  ∀ j ∈ S,
           Σ_{j∈S} m(j) = M.    (1)

Note that the achievable delivery time γ can be controlled through F and B to meet the application's real-time requirements. In this regard, the solution of (1) can determine the maximum achievable QoE of different states (i.e., the maximum value of F such that the constraints are met), using bisection over F (and/or B).

Fig. 2: Location-specific rate and memory distributions for Example 1.
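As a concrete illustration, the LP in (1) can be solved with a simple bisection on γ, exploiting the fact that at the optimum every actively cached state satisfies (1 − m(j))/r(j) = γ. The following is a minimal sketch; the function name and stopping rule are ours, not from the paper:

```python
def allocate_memory(rates, M, iters=200):
    """Solve LP (1): minimize the worst-case delivery time gamma subject to
    (1 - m_j)/r_j <= gamma for all states j and sum_j m_j = M.

    The memory used, sum_j max(0, 1 - gamma * r_j), is decreasing in gamma,
    so gamma is found by bisection, and the allocation follows as
    m_j = max(0, 1 - gamma * r_j)."""
    lo, hi = 0.0, 1.0 / min(rates)  # at gamma = 1/min(r), no memory is needed
    for _ in range(iters):
        gamma = (lo + hi) / 2.0
        used = sum(max(0.0, 1.0 - gamma * r) for r in rates)
        if used > M:   # allocation exceeds the cache budget: allow more time
            lo = gamma
        else:          # budget not exhausted: demand a shorter delivery time
            hi = gamma
    m = [max(0.0, 1.0 - gamma * r) for r in rates]
    return m, gamma
```

With rates consistent with Example 1 below (r = (3, 2, 1, 2, 3) data units per second, as in Fig. 2) and M = 2.25, this yields γ = 0.25 and m = (0.25, 0.5, 0.75, 0.5, 0.25), i.e., t(j) = K m(j) = (1, 2, 3, 2, 1) for K = 4.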
Fig. 3: Cache placement visualization for Example 1.
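The placement visualized in Fig. 3 can be reproduced with a short sketch of the per-state placement rule of Algorithm 1. Names here are illustrative, and t(j) = K·m(j) is assumed to be a positive integer:

```python
from itertools import combinations
from math import comb

def place_caches(K, t):
    """Per-state placement in the style of [5]: for each state j, split W(j)
    into C(K, t_j) sub-files W_V(j), one per user subset V with |V| = t_j,
    and store W_V(j) in the cache of every user i in V.

    K: number of users; t: dict mapping state j -> integer t(j) = K * m(j).
    Returns, per user, the set of (state, V) labels of the cached sub-files."""
    users = tuple(range(1, K + 1))
    cache = {i: set() for i in users}
    for j, tj in t.items():
        for V in combinations(users, tj):  # all user subsets of size t_j
            for i in V:
                cache[i].add((j, V))
    return cache

def cache_load(cache_i, K, t):
    """Normalized memory a user occupies: each sub-file of state j has size
    1 / C(K, t_j), so per state the load is C(K-1, t_j-1)/C(K, t_j) = t_j/K."""
    return sum(1.0 / comb(K, t[j]) for (j, V) in cache_i)
```

For Example 1 (K = 4, t = (1, 2, 3, 2, 1)), every user ends up storing exactly Σ_j t(j)/K = 2.25 data units, confirming that the cache size constraint is met.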
After the memory allocation process, we store data in the cache memories of the users following the same method proposed in [5]. In this regard, for every j ∈ S we split W(j) into C(K, t(j)) sub-files (with C(·,·) denoting the binomial coefficient) denoted by W_{V(j)}(j), where t(j) = K m(j) and V(j) can be any subset of the user set K with |V(j)| = t(j). Then, in the cache memory of user i ∈ K, we store W_{V(j)}(j) for every state j ∈ S and every set V(j) ∋ i. The cache placement procedure is outlined in Algorithm 1.

Example 1.
Consider an application with K = 4 users, where the environment is split into S = 5 states and for each state the required data size is F = 400 Megabytes. Each user has a cache size of 900 Megabytes, and hence, the normalized cache size is M = 2.25 data units. The spatial distribution of the achievable rate (assuming B = F) and the resulting memory allocation are as shown in Figure 2. It can be easily verified that t(1) = t(5) = 1, t(2) = t(4) = 2, and t(3) = 3. As a result, W(1), W(3) and W(5) should be split into four sub-files, while W(2) and W(4) are split into six sub-files. The resulting cache placement is visualized in Figure 3.

As W(j) is split into C(K, t(j)) sub-files and C(K−1, t(j)−1) sub-files are stored in the cache memory of each user, the total memory size dedicated to W(j) at each user is

C(K−1, t(j)−1) / C(K, t(j)) = t(j)/K = m(j),    (2)

and hence, the proposed algorithm satisfies the cache size constraints. Note that, in comparison with [5], here the required files in each state are considered as a separate library, and the cache placement algorithm of [5] is performed for each state independently of the others.

Here we assume that for every j ∈ S, m(j) > 0 and t(j) is an integer. In the next sections, we briefly discuss what happens if these constraints are not met; a detailed discussion is left for the extended version.

Algorithm 1
Location-based cache placement
procedure CACHE_PLACEMENT
    {m(j)} ← the result of the LP problem in (1)
    for all j ∈ S do
        t(j) ← K × m(j)
        W(j) → {W_{V(j)}(j) | V(j) ⊆ K, |V(j)| = t(j)}
        for all V(j) do
            for all i ∈ K do
                if i ∈ V(j) then
                    put W_{V(j)}(j) in the cache of user i

Algorithm 2 NCM-based content delivery
procedure DELIVERY
    t̂ ← min_{i∈K} t_i
    for all U ⊆ K : |U| = t̂ + 1 do
        X_U ← ∅
        for all i ∈ U do
            α_i ← C(t_i, t̂),  Y_{U,i} ← ∅,  U_{−i} ← U \ {i}
            for all V_i ⊆ K : |V_i| = t_i do
                if U_{−i} ⊆ V_i and i ∉ V_i then
                    W^q_{V_i,i} ← CHUNK(W_{V_i,i}, α_i)
                    Y_{U,i} ← CONCAT(Y_{U,i}, W^q_{V_i,i})
            X_U ← NEST(X_U, Y_{U,i}, r_i)
        transmit X_U

Also, different from the existing works, here files of different locations have distinct t(j) values, which should be carefully considered in the delivery phase. For notational simplicity, throughout the paper we ignore the brackets and separators while explicitly writing V(j), i.e., W_{ik}(j) ≡ W_{{i,k}}(j).

B. Content Delivery
At the beginning of the delivery phase, every user i ∈ K reveals its requested file W_i ≡ W(s_i). Note that W_i depends on the state s_i where user i is located.³ The server then builds and transmits several nested codewords, such that after receiving the codewords, all users can reconstruct their requested files. From the system model, user i requires a total amount of one normalized data unit to reconstruct W_i. However, a subset of this data, of size m_i ≡ m(s_i) data units, is available in the cache of user i. Since each user might be in a different state with distinct t(j), the conventional delivery scheme of [5] is no longer applicable, and a new delivery mechanism is required to achieve a proper multicasting gain.

The new delivery algorithm for the proposed setup is outlined in Algorithm 2. During the delivery phase, the server transmits a nested codeword X_U for every subset of users U with |U| = t̂ + 1, where t̂ is the common cache ratio defined as t̂ = min_{i∈K} t_i, and t_i ≡ t(s_i). From the placement phase, we recall that the file W_i intended for user i ∈ U is split into sub-files W_{V(s_i)},i. Let us define U_{−i} ≡ U \ {i} and consider a user k ∈ U_{−i}. All the sub-files W_{V(s_i)},i for which k ∈ V(s_i) are already available in the cache memory of user k. Thus, by transmitting (part of) every sub-file W_{V(s_i)},i for which U_{−i} ⊆ V(s_i) and i ∉ V(s_i), a portion of W_i is delivered to user i without causing any interference at other users k ∈ U_{−i}.

³ Note that we have used W(j) to represent the file required for reconstructing the FoV of state j, and W_i to denote the file requested by user i. The same convention is used for all notations in the text.

Example 2. Consider the network in Example 1, for which the cache placement is visualized in Figure 3. Assume that in a specific time slot, s_1 = 1, s_2 = 2, s_3 = 4, s_4 = 5. Denoting the set of requested sub-files for user i by T_i and assuming A ≡ W(1), B ≡ W(2), C ≡ W(4), D ≡ W(5), we have

T_1 = {A_2, A_3, A_4},    T_2 = {B_13, B_14, B_34},
T_3 = {C_12, C_14, C_24},  T_4 = {D_1, D_2, D_3}.    (3)

Note that the size of the sub-files of
A, B, C, and D are 1/4, 1/6, 1/6, and 1/4 data units, respectively. The common cache ratio is t̂ = 1, and hence, during each transmission we deliver data to t̂ + 1 = 2 users. Let us consider the case U = {1, 2}. The codeword X_12 is supposed to deliver a portion of the requested data to users 1 and 2 interference-free. This is done by including (parts of) A_2, B_13, B_14 in X_12. As can be seen from Figure 3, user 1 has B_13 and B_14 in its cache memory, and so it can remove them from its received data and decode (part of) A_2 ∈ T_1 interference-free. Similarly, user 2 can remove A_2 using its cache contents and decode (parts of) B_13, B_14 ∈ T_2 interference-free. The coding procedure is explained in more detail in Example 3.

As illustrated in Example 2, due to the proposed location-dependent cache placement, the value of t_i (and hence, the length of the transmitted sub-files) might be different for various users. Moreover, we may find more than one sub-file to be transmitted to user i ∈ U. We address these issues by introducing a normalizing factor α_i = C(t_i, t̂) and two auxiliary functions CHUNK and CONCAT. Whenever t_i > t̂ for user i ∈ U, we first split every sub-file intended for user i into α_i smaller chunks (denoted by W^q_{V(s_i)},i in Algorithm 2). Then, we concatenate a selection of these chunks to create the part of X_U intended for user i, represented by Y_{U,i}. The function CHUNK ensures none of the chunks of a sub-file is sent twice, and the function CONCAT creates a bit-wise concatenation of the given chunks. The final codeword X_U is created by nesting Y_{U,i} for every user i ∈ U, as performed by the auxiliary function NEST in Algorithm 2. Note that, due to the nesting, every Y_{U,i} can be transmitted with rate r_i ≡ r(s_i) (cf. [17]).

Example 3.
Following Example 2, let us review how the codeword X_12 is built. This codeword includes (parts of) A_2, B_13, B_14, intended for users 1 and 2. However, the size of A_2 is 1/4, while B_13 and B_14 are both 1/6 data units. To solve this issue, we use the normalizing factors α_1 = 1 and α_2 = 2, and build

X_12 = A_2 ∗ CONCAT(B¹_13, B¹_14),    (4)

where the operator (∗) denotes the nesting operation and superscripts are used to differentiate various chunks of a sub-file. The nesting operation in X_12 is performed such that A_2 and CONCAT(B¹_13, B¹_14) are delivered with rates r_1 = 3 and r_2 = 2 data units per second, respectively. As a result, this codeword is delivered in max(1/4 × 1/3, 1/6 × 1/2) = 1/12 seconds. Similarly, as α_3 = 2 and α_4 = 1, we have

X_13 = A_3 ∗ CONCAT(C¹_12, C¹_14),    X_14 = A_4 ∗ D_1,
X_23 = CONCAT(B²_13, B¹_34) ∗ CONCAT(C²_12, C¹_24),
X_24 = CONCAT(B²_14, B²_34) ∗ D_2,
X_34 = CONCAT(C²_14, C²_24) ∗ D_3,    (5)

where each transmission requires 1/12 seconds. Thus, the total delivery time is 6/12 = 1/2 seconds. In comparison, it can be seen that the delivery time would be doubled without multicasting.

Theorem 1.
Using the proposed cache placement and content delivery algorithms, every user receives its requested data.

Proof.
A user i ∈ K needs a total amount of one (normalized) data unit to reconstruct its requested file W_i. If the user is in state s_i, m(s_i) data units are available in its cache memory, and hence it needs to receive 1 − m(s_i) = 1 − t_i/K data units from the server. As detailed in the delivery algorithm, user i receives parts of its requested data in C(K−1, t̂) transmissions. Consider one such transmission, where the codeword X_U is transmitted to the users in U ∋ i. During this transmission, user i receives Y_{U,i}, which is built through the concatenation of C(K−t̂−1, t_i−t̂) chunks, each of size 1/(α_i C(K, t_i)) = 1/(C(K, t_i) C(t_i, t̂)) data units. As a result, the total data size delivered to user i from the server is

C(K−1, t̂) C(K−t̂−1, t_i−t̂) / (C(K, t_i) C(t_i, t̂)) = (K − t_i)/K = 1 − t_i/K,    (6)

and the proof is complete.

IV. PERFORMANCE ANALYSIS
We discuss the performance of the proposed scheme assuming that for every state j ∈ S, t(j) is a positive integer. Then we briefly review what happens if this assumption is relaxed.

Lemma 1.
For every two states j, j′ ∈ S, the result of the memory allocation problem in (1) satisfies

(1 − m(j))/r(j) = (1 − m(j′))/r(j′),
(1 − m(j))/r(j) = (S − M) / Σ_{j′∈S} r(j′).    (7)

Proof.
The first equality can be simply proved by contradiction. Using this equality, for the second one we can write

(1 − m(j))/r(j) × Σ_{j′∈S} r(j′) = Σ_{j′∈S} (1 − m(j′)) = S − M,    (8)

and the proof is complete.
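Lemma 1 gives the allocation of (1) in closed form: γ = (S − M)/Σ_j r(j) and m(j) = 1 − γ r(j). A quick numerical check of this closed form (a sketch, assuming m(j) ≥ 0 for all states):

```python
def closed_form_allocation(rates, M):
    """Memory allocation implied by Lemma 1: all states share the same
    delivery time gamma = (S - M) / sum_j r(j), hence m(j) = 1 - gamma*r(j)."""
    S = len(rates)
    gamma = (S - M) / sum(rates)
    return [1.0 - gamma * r for r in rates], gamma
```

With the Example 1 parameters (r = (3, 2, 1, 2, 3), M = 2.25) this gives γ = 2.75/11 = 0.25 and m = (0.25, 0.5, 0.75, 0.5, 0.25); the per-state delivery times (1 − m(j))/r(j) are all equal to γ, and Σ_j m(j) = M, as the lemma states.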
Theorem 2.
The total delivery time of the proposed scheme, denoted by T_m, is calculated as

T_m = K/(t̂ + 1) × (S − M) / Σ_{j∈S} r(j).    (9)

Proof.
A user i ∈ K receives 1 − m(s_i) data units from the server through n = C(K−1, t̂) transmissions. However, as the same procedure is followed to build Y_{U,i} for every U ∋ i, the data size delivered to user i is the same in all these transmissions. Moreover, this user is always served with the rate r_i, and hence, it needs T_i = (1 − m(s_i))/(n r_i) seconds to receive each data part. Using Lemma 1, we can rewrite T_i as T_i = (S − M)/(n Σ_{j∈S} r(j)), which means the time required for each transmission is the same and independent of the user selection. As in the delivery phase there exists a total number of C(K, t̂+1) transmissions, the total delivery time is

T_m = C(K, t̂+1)/C(K−1, t̂) × (S − M)/Σ_{j∈S} r(j) = K/(t̂ + 1) × (S − M)/Σ_{j∈S} r(j),    (10)

and the proof is complete.
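The two sides of (10) can be checked numerically for the setting of Examples 1–3 (a sketch; the helper names are ours):

```python
from math import comb

def delivery_time_formula(K, t_hat, S, M, sum_rates):
    """Right-hand side of (9): T_m = K/(t_hat + 1) * (S - M) / sum_j r(j)."""
    return K / (t_hat + 1) * (S - M) / sum_rates

def per_transmission_time(K, t_hat, m_i, r_i):
    """Duration of one codeword as in the proof: user i collects its
    1 - m_i missing data units spread evenly over the C(K-1, t_hat)
    codewords that include it, each part delivered at rate r_i."""
    return (1.0 - m_i) / (comb(K - 1, t_hat) * r_i)
```

For K = 4, t̂ = 1, S = 5, M = 2.25 and Σ_j r(j) = 11 (the Example 1 setting), each of the C(4, 2) = 6 transmissions takes 1/12 s, e.g. per_transmission_time(4, 1, 0.25, 3), and 6 × 1/12 = 0.5 s matches delivery_time_formula(4, 1, 5, 2.25, 11), in agreement with Example 3.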
Fig. 4: Average delivery time vs the user count (K), M = S.
Fig. 5: Relative gain in delivery time vs the cache size (M), K = 10, S = 121.
Fig. 6: Relative gain in delivery time vs the cache size (M), K = 10, S = 441.

Theorem 3.
Compared with unicasting, the proposed scheme reduces the delivery time by a factor of t̂ + 1.

Proof. In the case of unicasting, a user i gets 1 − m(s_i) data units with the rate r_i, and hence, the total delivery time is equal to T_u = Σ_{i∈K} (1 − m(s_i))/r_i. However, using Lemma 1 we have

T_u = Σ_{i∈K} (S − M)/Σ_{j∈S} r(j) = K (S − M)/Σ_{j∈S} r(j),    (11)

which is t̂ + 1 times larger than the T_m value in (9).

Corollary 1.
The proposed scheme becomes more efficient as t̂ grows. This happens, for example, when more users are located in states with poor channel conditions.

It can be shown that, even if not all t(j) values are integers, all the results provided in this section still hold. However, if m(j) = 0 for some j ∈ S, the reduction factor in delivery time given in Theorem 3 is no longer valid. This is because, in such a case, it is not always possible to deliver all the required data through multicasting (some missing parts should be delivered by unicasting). Still, it can be shown that the performance gap between the proposed scheme and the optimal solution is bounded. Due to the lack of space, a detailed discussion of these results is left for the extended version of this paper.

V. SIMULATION RESULTS
We use numerical simulations to evaluate the performance of the proposed location-dependent scheme. For the simulations, we consider a square room where the transmitter is located in the middle of the room. The room area is split into S = 121 states, and the users are uniformly distributed over the states. We compare the delivery time of the proposed scheme with unicasting and with the baseline scheme of [8]; these delivery times are denoted by T_m, T_u and T_x, respectively. For the scheme of [8], m(j) = M/S for every state j ∈ S, and the multicast rate for a user set U is limited by min_{i∈U} r(s_i).

In Figure 4, we compare the proposed location-dependent scheme with unicast transmission for different values of the user count K. Clearly, the new scheme not only reduces the delivery time, but also makes it (almost) independent of K. This independence is justified by Theorem 2, which states that for integer t̂, T_m is proportional to K/(t̂ + 1). However, as t̂ = min_{i∈K} t_i and t_i = K m(s_i), as K becomes larger, the value of T_m approaches a constant value.

In Figure 5, the same comparison is made for different values of the cache size M. It can be seen that the relative transmission gain T_u/T_m increases almost linearly once M is large enough. This is because for smaller M, there exists at least one state j ∈ S for which t(j) < 1, and hence, the efficiency of the proposed scheme is reduced, as multicasting is not possible when some users are located in such states. On the other hand, it can be seen that the proposed scheme performs worse than the baseline scheme of [8] when M is small. This is because our proposed scheme sacrifices the global caching gain for a higher local caching gain.
While this may not seem appealing, it results in more resilient performance when the ratio between the best and worst channel conditions is large. This is investigated in Figure 6, where we repeat the simulations for a larger room area, split into S = 441 states. Clearly, in this case, the proposed location-dependent scheme outperforms the baseline scheme of [8] by a good margin, confirming its resilience to the existence of ill-conditioned states.

VI. CONCLUSIONS AND FUTURE WORK
In this paper, we proposed a centralized, location-dependent coded caching scheme tailored for future immersive viewing applications. For the placement phase, we use a memory allocation process to allocate larger cache portions where the channel condition is poor, and during the delivery phase, we use nested code modulation to support different data rates within a single multicast transmission. The resulting scheme provides an (almost) uniform QoE throughout the application environment, and performs better than the state of the art in ill-conditioned scenarios where the ratio between the best and worst channel conditions is large.

The proposed location-dependent scheme can be extended in various directions. An extension to support multi-antenna communication setups is currently in progress. Other notable research opportunities include supporting multiple transmitters, incorporating side information on user movement patterns and state transition probabilities, and considering a more dynamic scenario where the users' cache content is updated as they move through the application environment.
REFERENCES

[1] L. Han, S. Appleby, and K. Smith, "Problem statement: Transport support for augmented and virtual reality applications," Working Draft, IETF Secretariat, Internet-Draft draft-haniccrg-arvr-transport-problem-00, March 2017.
[2] E. Bastug, M. Bennis, M. Médard, and M. Debbah, "Toward interconnected virtual reality: Opportunities, challenges, and enablers," IEEE Communications Magazine, vol. 55, no. 6, pp. 110–117, 2017.
[3] E. Bastug, M. Bennis, and M. Debbah, "Living on the edge: The role of proactive caching in 5G wireless networks," IEEE Communications Magazine, vol. 52, no. 8, pp. 82–89, 2014.
[4] Y. Sun, Z. Chen, M. Tao, and H. Liu, "Communications, caching, and computing for mobile virtual reality: Modeling and tradeoff," IEEE Transactions on Communications, vol. 67, no. 11, pp. 7573–7586, 2019.
[5] M. A. Maddah-Ali and U. Niesen, "Fundamental limits of caching," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856–2867, 2014.
[6] S. P. Shariatpanahi, S. A. Motahari, and B. H. Khalaj, "Multi-server coded caching," IEEE Transactions on Information Theory, vol. 62, no. 12, pp. 7253–7271, 2016.
[7] S. P. Shariatpanahi and B. H. Khalaj, "On multi-server coded caching in the low memory regime," arXiv preprint arXiv:1803.07655, pp. 1–12, 2018. [Online]. Available: https://arxiv.org/pdf/1803.07655.pdf
[8] A. Tölli, S. P. Shariatpanahi, J. Kaleva, and B. H. Khalaj, "Multi-antenna interference management for coded caching," IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2091–2106, 2020.
[9] H. B. Mahmoodi, J. Kaleva, S. P. Shariatpanahi, and A. Tölli, "D2D assisted multi-antenna coded caching," arXiv preprint arXiv:2010.05459, 2020.
[10] E. Lampiris and P. Elia, "Adding transmitters dramatically boosts coded-caching gains for finite file sizes," IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, pp. 1176–1188, 2018.
[11] M. Salehi, E. Parrinello, S. P. Shariatpanahi, P. Elia, and A. Tölli, "Low-complexity high-performance cyclic caching for large MISO systems," arXiv preprint arXiv:2009.12231, 2020.
[12] M. Salehi, A. Tölli, S. P. Shariatpanahi, and J. Kaleva, "Subpacketization-rate trade-off in multi-antenna coded caching," IEEE, 2019, pp. 1–6.
[13] A. Destounis, A. Ghorbel, G. S. Paschos, and M. Kobayashi, "Adaptive coded caching for fair delivery over fading channels," IEEE Transactions on Information Theory, 2020.
[14] M. Salehi, A. Tölli, and S. P. Shariatpanahi, "Coded caching with uneven channels: A quality of experience approach," 2020, pp. 1–5.
[15] A. Tang, S. Roy, and X. Wang, "Coded caching for wireless backhaul networks with unequal link rates," IEEE Transactions on Communications, vol. 66, no. 1, pp. 1–13, 2017.
[16] Z. Chen, H. Liu, and W. Wang, "A novel decoding-and-forward scheme with joint modulation for two-way relay channel," IEEE Communications Letters, vol. 14, no. 12, pp. 1149–1151, 2010.
[17] S. Tang, H. Yomo, T. Ueda, R. Miura, and S. Obana, "Full rate network coding via nesting modulation constellations,"