Subpacketization-Rate Trade-off in Multi-Antenna Coded Caching
MohammadJavad Salehi∗, Antti Tölli∗, Seyed Pooya Shariatpanahi†, Jarkko Kaleva∗
∗Center for Wireless Communications, University of Oulu, Oulu, Finland.
†School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran.
{first name.last name}@oulu.fi, [email protected]

Abstract—Coded caching can be applied in wireless multi-antenna communications by multicast beamforming coded data chunks to carefully selected user groups and using the existing file fragments in user caches to decode the desired files at each user. However, the number of packets a file should be split into, known as subpacketization, grows exponentially with the network size. We provide a new scheme, which enables the level of subpacketization to be selected freely among a set of predefined values depending on basic network parameters such as antenna and user count. A simple efficiency index is also proposed as a performance indicator at various subpacketization levels. The numerical examples demonstrate that larger subpacketization generally results in a better efficiency index and a higher symmetric rate, while smaller subpacketization incurs a significant loss in the achievable rate. This enables more efficient caching schemes, tailored to the available computational and power resources.
Index Terms—multi-antenna coded caching, multicast beamforming, flexible subpacketization
I. INTRODUCTION
Global mobile data traffic has been subject to consistent, strong growth during recent years. The Cisco visual networking index [1] estimates global mobile traffic to reach 77 exabytes per month by 2022, a more than six-fold increase compared to 2017. Moreover, more than 80% of this traffic is expected to be generated by mobile video, and as a result, research efforts in efficient video delivery have intensified in recent years. Caching content closer to the end-users, known as Edge Caching, is among the promising solutions proposed in this direction [2], [3].

Coded caching, which first appeared in the pioneering work of [4], is an interesting approach to edge caching for wireless communications, through which one can benefit from the broadcast nature of wireless channels. It is shown in [4] that caching at end-user locations and multicasting coded file chunks to carefully selected user groups enables a reduction in the delivery bandwidth over the broadcast channel.

The original scheme of [4] assumes a separate, offline cache placement phase and an error-free broadcast channel to the end-users. Since then, significant effort has been made by the research community to apply the coded caching concept to more general network setups. For example, in [5]–[7] decentralized, online and hierarchical coded caching are
considered, respectively. Similarly, the extension of [4] to a multi-server scenario is studied in [8], and the same concept is then applied to multi-antenna wireless communications in [9]–[12].

We focus on the application of coded caching in multi-antenna wireless communications and the well-known problem of subpacketization, i.e., the number of packets each file should be split into for a caching scheme to work properly [12]. The very large subpacketization required by the schemes presented in [9]–[11] makes them infeasible for even moderate-sized networks, and at the same time adds considerably to the complexity of the beamformer design. Reducing subpacketization is partially tackled in [12], by assigning users to fixed-size groups and using zero-forcing (ZF) and finite field summation for intra- and inter-group interference cancellation, respectively. However, as will be discussed, this scheme also results in a rate loss compared to [10], [11].

In this paper, we provide a new coded caching scheme, which enables the selection of a desired subpacketization level among a set of possible values depending on basic network parameters such as the user count, the number of available antennas and the global cache ratio (as defined in [4]). We show that the schemes provided in [9] and [12] are two extreme cases of our scheme, built with the highest and lowest possible subpacketization levels, respectively. We also provide a simple efficiency index, which indicates how good the performance (in terms of symmetric rate) will be at a specific subpacketization level. We show that larger subpacketization usually results in a higher efficiency index, and accordingly in a higher communication rate.

This work was supported by the Academy of Finland under grants no. 319059 (Coded Collaborative Caching for Wireless Energy Efficiency) and 318927 (6Genesis Flagship).
Particularly, the schemes of [10] and [12] provide the highest and lowest symmetric rates, respectively, and the rate difference is more considerable in the low-SNR regime.

In this paper, we use the following notation. Boldface lower-case and capital letters indicate vectors and matrices, respectively. Sets are shown with calligraphic letters. For a positive integer K, [K] denotes the set {1, 2, ..., K}. For two sets S and T, |S| indicates the number of elements in S, and S\T is the set of elements of S that are not in T. For any vector u, ‖u‖ is its second (Euclidean) norm.

II. SYSTEM MODEL AND LITERATURE REVIEW
We consider K users in a MISO wireless channel with L antennas at the transmitter. Each user has a cache memory of size MF bits. The cache contents are updated during the placement phase, which can take place, for example, during off-peak hours. During the delivery phase, users request files from a library of size N, where each file is of size F bits. In order to fulfill the requests, each requesting node uses its cache content together with the data received over the communication channel. The problem is to design the placement and delivery phases such that the time needed for all users to decode their requested files is minimized; or equivalently, the symmetric rate of all users is maximized.

In [4] it is shown that a global caching gain, proportional to the total cache size available in the network, is achievable in addition to the local gain of each single cache memory. This added gain is obtained by multicasting coded data chunks to multiple cache-enabled users, so we use the term multicasting gain interchangeably with global caching gain in this paper. Defining the global cache ratio t = KM/N, the scheme in [4] provides a multicasting gain of t + 1. In [8] it is shown that using L servers simultaneously, the multicasting gain increases to t + L. In [9] this result is applied to the wireless communication scenario with L antennas at the transmitter. Here we briefly review the scheme of [9], as it shares many common parts with the new scheme presented in this paper.

The placement phase of [9] requires each file W to be first split into \binom{K}{t} equal-sized packets W_{p(T)}, where p(T) assigns a unique index to each T ⊆ [K] with |T| = t. Each packet is then further split into

Q = \binom{K - t - 1}{L - 1}   (1)

equal-sized subpackets W^q_{p(T)}; and each W^q_{p(T)} is stored in the cache memory of all users k ∈ T. During the delivery phase, for each S ⊆ [K] with |S| = t + L a separate vector x(S) is transmitted.
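As a quick sanity check, the overall subpacketization of [9] implied by (1) can be computed directly; a minimal sketch (the function name is ours):

```python
from math import comb

def subpacketization(K: int, t: int, L: int) -> int:
    """Total subpackets per file in the scheme of [9]:
    C(K, t) packets, each split into Q = C(K - t - 1, L - 1) subpackets."""
    return comb(K, t) * comb(K - t - 1, L - 1)

# K = 4 users, t = L = 2: C(4, 2) * C(1, 1) = 6 subpackets per file
print(subpacketization(4, 2, 2))   # -> 6
```

For K = 5, t = L = 2 this gives 20, matching the count used in Example 2 below, and it grows exponentially in K as discussed in the introduction.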
Assume user k requests the file W(k). For simplicity, we ignore the time index for the vectors x(S), which are sent in a TDMA fashion. To create x(S), first, for each V ⊆ S with |V| = t + 1, the codeword X(V) is built as

X(V) = \bigoplus_{k ∈ V} W^{q(k,V)}_{p(V\{k})}(k),   (2)

where ⊕ denotes bit-wise XOR and q(k, V) is defined such that each subpacket is transmitted only once, and also such that after all transmissions are concluded, each user is able to decode its requested file (c.f. [9]). Next, the vector u(V) is defined such that ‖u(V)‖ = 1 and

u(V) ⊥ h_k   ∀k ∈ S\V,   (3)

where ⊥ denotes perpendicularity and h_k is the L × 1 channel vector from the base station to user k. Finally, x(S) is built as

x(S) = \sum_{V ⊆ S} u(V) \sqrt{p(V)} X(V),   (4)

where p(V) is the power allocated to the transmission of codeword X(V). It can be verified that by transmitting x(S), each user k ∈ S is able to decode part of W(k) using its cache contents, and after concluding successive transmissions for all S ⊆ [K], all users in the system are able to completely decode their requested files. Specifically, after a single transmission x(S) is concluded, each user k ∈ S faces a MAC channel with \binom{t+L-1}{t} terms, and the transmission rate and power should be adjusted such that simultaneous decoding of all terms is possible at every user.

Although the scheme of [9] achieves the full multicasting gain of t + L for each transmission, it has major drawbacks. First, the required subpacketization grows exponentially with K, making the scheme infeasible for even moderate values of K [12]. Second, ZF results in a poor rate, especially in the low-SNR regime [11]. Finally, the complexity of successive interference cancellation for decoding all terms in the MAC channel grows exponentially with the MAC size [10]. Although workarounds for these drawbacks are available in the literature [10]–[12], unfortunately, improving any one drawback causes the others to become worse.
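The ZF vectors u(V) of (3) are simply unit-norm directions orthogonal to the channels of the users in S\V. A minimal sketch for L = 2 transmit antennas, assuming a real-valued channel for simplicity (actual channels are complex, and general L would require a nullspace computation, e.g. via the SVD):

```python
def zf_direction_2ant(h):
    """For L = 2 transmit antennas, return a unit vector u with u ⊥ h,
    where h is the (real-valued, for simplicity) channel of the single
    user in S\\V at which the codeword X(V) must be nulled, as in (3)."""
    h1, h2 = h
    n = (h1 ** 2 + h2 ** 2) ** 0.5
    return (h2 / n, -h1 / n)    # rotate h by 90 degrees and normalize

u = zf_direction_2ant((3.0, 4.0))
print(u)    # u = (0.8, -0.6), orthogonal to (3, 4)
```

With L antennas the same idea nulls X(V) at up to L − 1 users, which is exactly the degree of freedom exploited throughout the paper.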
As a quick review, the scheme provided in [12] divides users into groups of size L and uses ZF and finite field summation to mitigate intra- and inter-group interference, respectively. Although this solution results in a dramatic drop in the required subpacketization (especially if t/L is an integer), it provides a lower symmetric rate than the other schemes, as demonstrated later in Section IV. On the other hand, the scheme of [11] suggests designing optimized multicast beamformers instead of ZF to improve the performance at low and mid SNR. More precisely, instead of (4), x(S) is created as

x(S) = \sum_{V ⊆ S} w(V) X(V),   (5)

where w(V) is a general beamformer vector. The unwanted terms then appear as (Gaussian) interference instead of being nulled at each user. Although this formulation significantly improves the rate at low SNR, considering the interference between parallel multi-group multicast messages makes the beamformer design significantly more complex [10]. Finally, different schemes with reduced complexity are proposed in [10]. However, in certain scenarios, this may have a further detrimental impact on the required subpacketization.

In this paper we provide a new scheme, achieving the full multicasting gain of t + L with subpacketization P × Q, where Q is defined in (1) and P can be selected freely from a set of predefined values. We show that the schemes of [9] and [12] are two extreme cases of this new scheme, with P = \binom{K}{t} and P = \binom{K/L}{t/L}, respectively.

Before providing the details, let us consider a network with K = 4 and L = t = 2. The scheme provided in [10], [11] requires subpacketization P = \binom{4}{2} = 6, while the one presented in [12] reduces the subpacketization to P = \binom{2}{1} = 2. Using the new scheme, for this network one can build another coded caching scheme with subpacketization P = 4. A comparison of the rate versus SNR for these three schemes is shown in Figure 1.
It should be noted that the results in Figure 1 are calculated using an optimized beamformer design, which provides a better rate than the ZF used in [12]. Still, it is clear that increasing P results in an increase in the symmetric rate. The rate advantage of P = 4 over P = 2 is largest at 0 dB and gradually shrinks as the SNR approaches 40 dB; this illustrates the stronger impact of increased subpacketization at lower SNR.

For non-integer K/t and t/L, ⌊K/t⌋ and ⌈t/L⌉ are used.
Fig. 1: Rate vs SNR, Optimized Beamformer - K = 4, t = L = 2.

III. SUBPACKETIZATION-SELECTIVE SCHEME
First we consider the special case of K = t + L, and then provide an extension to more general setups.

A. Cache Placement Scheme
Assume K = t + L and each file W is divided into P equal-sized parts W_p. We define a binary placement matrix V of size P × K and represent its elements by v_{p,k}, where p ∈ [P] and k ∈ [K]. If v_{p,k} = 1, then for any file W in the file library, we store W_p in the cache memory of user k. For example, consider the three curves in Figure 1. The corresponding placement matrix for P = 2 is

V = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix},   (6)

while for P = 4 and P = 6 we have used

V' = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{bmatrix},  V'' = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{bmatrix},   (7)

respectively. Denoting the cache content of user k by Z(k) and considering the file library {A, B, C, D}, we have

Z(1) = {A_1, B_1, C_1, D_1},
Z'(1) = {A_1, A_4, B_1, B_4, C_1, C_4, D_1, D_4},
Z''(1) = {A_1, A_3, A_6, B_1, B_3, B_6, C_1, C_3, C_6, D_1, D_3, D_6},

corresponding to V, V' and V'', respectively. We say V is a valid placement matrix if its rows are distinct and

\sum_p v_{p,k} = \frac{Pt}{K}   ∀k ∈ [K],   (8a)
\sum_k v_{p,k} = t   ∀p ∈ [P].   (8b)

Note that (8a) represents the cache size constraint at each user; i.e., as the size of each packet is F/P and there are N files in the library, the required cache size at each user becomes

\sum_p v_{p,k} × N × \frac{F}{P} = \frac{P}{K} × \frac{KM}{N} × N × \frac{F}{P} = MF,   (9)

which is exactly the available cache size at each user. Moreover, (8b) guarantees that each packet is replicated the same number of times throughout all the cache memories in the network. For example, it can easily be checked that all the placement matrices in (6) and (7) are valid. According to (8a) and (8b), it is clear that reordering rows or columns of a valid placement matrix results in another valid placement matrix. The following lemmas state the relationship between valid placement matrices and the caching schemes of [9] and [12].
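The validity conditions (8a) and (8b) are easy to check programmatically; a small sketch (the two example matrices below are one consistent choice for K = 4, t = 2):

```python
def is_valid_placement(V, t):
    """Check validity of a binary placement matrix V (list of rows):
    distinct rows, every row sums to t (8b), and every column sums
    to P*t/K (8a)."""
    P, K = len(V), len(V[0])
    if len(set(map(tuple, V))) != P:
        return False                      # rows must be distinct
    if any(sum(row) != t for row in V):
        return False                      # (8b): exactly t ones per row
    col_sum = P * t / K
    return all(sum(row[k] for row in V) == col_sum for k in range(K))  # (8a)

# Two example placement matrices for K = 4, t = 2 (P = 2 and P = 4):
V2 = [[1, 0, 1, 0], [0, 1, 0, 1]]
V4 = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
print(is_valid_placement(V2, 2), is_valid_placement(V4, 2))   # -> True True
```

Note that concatenating the two matrices row-wise also passes the check, mirroring the block-concatenation property used in Section III-D.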
Lemma 1. If K = t + L, the cache placement scheme of [9] is equivalent to a valid placement matrix with P = \binom{K}{t}.

Proof. Consider a valid placement matrix V with P = \binom{K}{t} rows and K columns. According to the validity condition (8b), there exist exactly t non-zero elements in each row of V. As the rows are distinct, the rows of V include all possible combinations of t non-zero elements in K positions. Labeling each row with a set T consisting of the column indices of the non-zero elements in that row, and using the same index p(T) for the packets as described in Section II, we arrive at the same placement as [9].

Lemma 2. If K/L and t/L are integers, then the cache placement scheme of [12] is equivalent to a valid placement matrix with P = \binom{K/L}{t/L}.

Proof. Similar to Lemma 1; follows from labeling the rows in a valid placement matrix with P = \binom{K/L}{t/L}.

Lemmas 1 and 2 provide the upper and lower boundaries for the subpacketization, respectively. Any other \binom{K/L}{t/L} < P < \binom{K}{t}, for which the validity conditions hold, provides a possible cache placement with a different subpacketization value.

B. Delivery Scheme
Assume the cache placement is performed according to a given valid placement matrix V. At the start of the delivery phase, each user k reveals its requested file W(k). For each V ⊆ [K] with |V| = t + 1, we create a codeword X(V) such that all users in V can decode part of their missing data (the parts for which the corresponding v_{p,k} is zero) with the help of X(V). Defining the corresponding packet set of V as

Φ(V) = {p ∈ [P] | v_{p,k} = 0, ∀k ∈ [K]\V},   (10)

the codeword X(V) is built as

X(V) = \bigoplus_{k ∈ V} \bigoplus_{p ∈ Φ(V)} (1 − v_{p,k}) W_p(k).   (11)

As a brief explanation, according to the placement condition (8b), each row of V has exactly t non-zero and K − t = L zero elements. Also, as we have L antennas, it is possible to null (or suppress) X(V) at L − 1 users. The set Φ(V) contains the packet indices for which the corresponding v_{p,k} is zero for all L − 1 users in [K]\V, and so for each p ∈ Φ(V), there exists exactly one k ∈ V such that v_{p,k} = 0. This enables each user k ∈ V to remove the unwanted terms from X(V) using its cache contents, and to decode its missing packet accordingly.

Nulling (or suppressing) each X(V) at all L − 1 users in [K]\V also enables us to transmit all the codewords simultaneously, thus achieving the full multicasting gain of t + L. Defining u(V) such that ‖u(V)‖ = 1 and u(V) ⊥ h_k for all k ∈ [K]\V, we can build the transmission vector as in (4). Alternatively, instead of ZF we can use the optimized beamformer as in (5).
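The construction of Φ(V) in (10) and of the codeword in (11) can be sketched symbolically as follows, returning (user, packet) index pairs instead of XORed bits; indices are 0-based, and the placement matrix shown is one valid choice for K = 4, t = L = 2:

```python
def packet_set(V_mat, Vset, K):
    """Phi(V) of (10): packets cached by no user outside Vset
    (v_{p,k} = 0 for all k not in Vset)."""
    others = [k for k in range(K) if k not in Vset]
    return [p for p, row in enumerate(V_mat) if all(row[k] == 0 for k in others)]

def codeword(V_mat, Vset, K):
    """Symbolic X(V) of (11): for each p in Phi(V) there is exactly one
    k in Vset with v_{p,k} = 0; that user decodes packet p of its file.
    Returns the XORed (user, packet) pairs instead of actual bits."""
    return [(k, p) for p in packet_set(V_mat, Vset, K)
            for k in sorted(Vset) if V_mat[p][k] == 0]

# A P = 4 circulant placement for K = 4, t = L = 2:
V_mat = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
# Codeword for users 2, 3, 4 (0-indexed {1, 2, 3}): user 4 decodes
# packet 2 of its file and user 2 decodes packet 3.
print(codeword(V_mat, {1, 2, 3}, 4))   # -> [(3, 1), (1, 2)]
```

Each returned pair identifies one term of the XOR in (11); all other terms of the received codeword are removed by the cache, as explained above.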
In this case, X(V) appears as Gaussian interference instead of being nulled at the users k ∈ [K]\V, and the resulting symmetric rate will be higher [10], [11].

According to the above explanation, all users in V can decode part of their requested data after receiving X(V); and as each missing data part is addressed in exactly one set V, building x as in (4) enables all users to decode every missing data part and completely decode their requested files (in Example 1 this procedure is reviewed for a simple network). It should be noted that each user faces a MAC channel after the transmission of x, and the transmission rate and power should be adjusted such that simultaneous decoding of all missing data parts is possible.

C. Efficiency Index
Assume the transmission vector x is built as in (4). After x is transmitted, user k receives

y(k) = h_k^T x + z_k,   (12)

where z_k is the additive Gaussian noise at user k. For each constituent term of x, one of the following is possible:

1) k ∉ V: h_k ⊥ u(V), and hence the term X(V) is nulled (or suppressed by w(V)) at user k;
2) k ∈ V and ∃p ∈ Φ(V) with v_{p,k} = 0: the term X(V) is received, from which user k can then decode the desired part W_p(k);
3) k ∈ V and ∄p ∈ Φ(V) with v_{p,k} = 0: the term X(V) is received at user k, but does not contain any missing packet for user k, and so it is entirely removed using the cache contents.

The third possibility can be interpreted as a case of power loss, as the power used to transmit the removed terms is not used for increasing the signal quality at that particular user. This is the reasoning behind our definition of the efficiency index. Let θ(k) denote the number of terms completely removed by the cache contents of user k. Then the efficiency index Γ(k) is

Γ(k) = 1 − θ(k)/φ(x),   (13)

where φ(x) is the total number of terms transmitted in the transmission vector x.

Example 1.
Consider the cache placement matrix V' in (7) and assume the demand set is {A, B, C, D}. After the transmission of x is concluded, user 1 receives

y(1) = (B_3 ⊕ D_2) h_1^T u_1 \sqrt{p_1} + (A_3 ⊕ C_4) h_1^T u_2 \sqrt{p_2} + (B_4 ⊕ D_1) h_1^T u_3 \sqrt{p_3} + (A_2 ⊕ C_1) h_1^T u_4 \sqrt{p_4} + z_1,   (14)

where u_k = u(V_k), p_k = p(V_k) and V_k = [4]\{k} (as t = 2, each V includes three users). Considering the RHS of (14), B_3 ⊕ D_2 is nulled (as u_1 ⊥ h_1), while A_3 ⊕ C_4 and A_2 ⊕ C_1 contain useful data (A_3 and A_2; C_4 and C_1 are removed by the cache). However, B_4 ⊕ D_1 is entirely removed using the cache content of user 1. Therefore, φ(x) = 4, θ(1) = 1 and the efficiency index of user 1 becomes Γ(1) = 0.75. It can easily be checked that the efficiency index of the other users is the same and independent of the request vector.

For notational convenience we assume ZF beamforming, but the following discussion also holds for the optimized beamformer (5).

For x built as in (5), z_k would also contain the Gaussian interference terms from all w(V)X(V) with k ∉ V [10], [11].

D. Building the Placement Matrix
In order to build valid placement matrices, we introduce placement blocks. Denoted by Λ^K_t(p), a placement block of size p for K users with global memory ratio t is a p × K binary matrix for which:
1) the placement validity conditions (8a) and (8b) are met;
2) each row can be transformed into any other row using circular shift operations.
For example, V and V' in (6) and (7) are Λ^4_2(2) and Λ^4_2(4), respectively. It should be noted that Λ^K_t(p) is not necessarily unique (examples will be provided in Section IV). Moreover, it can easily be checked that the column-wise concatenation of two (or more) placement blocks results in a valid placement matrix. For example, the concatenation of V and V' results in V'' as given in (7), which is a valid placement matrix.

For general K and t values, one can easily build a placement block by considering a random 1 × K vector with t non-zero elements as the first row of the block, and building each new row by circularly shifting the previous row by one unit, until the first row is repeated. The resulting matrix is a placement block. These placement blocks can then be concatenated freely to build valid placement matrices with different P values.

E. Extension to K > t + L

So far we have only considered networks with K = t + L. If K < t + L, as the number of users falls below the maximum multicasting gain, providing an appropriate coded caching scheme becomes very straightforward. Specifically, any solution for a network with L' = K − t antennas works for the same network with L > L' antennas. Moreover, as mentioned in [10], [11], the extra antennas provide increased flexibility and gain for the multicast beamformer design.

In case K > t + L the solution is not trivial. Defining Q as in (1), we provide a scheme with total subpacketization P × Q, where P can be any number for which a valid placement matrix of dimensions P × K and global cache ratio t exists. During the placement phase, each file W is first split into P packets denoted by W_p, and then each packet is further split into Q subpackets W^q_p. Assume V is the given valid placement matrix and W is a file in the library. For each k ∈ [K] and p ∈ [P], if v_{p,k} = 1 we store W^q_p for all q ∈ [Q] in the cache memory of user k.

For delivery, we select all subsets S ⊂ [K] with |S| = t + L. For each S, we generate a matrix V^S of size P × K with elements v^S_{p,k}, using the procedure of Algorithm 1. V^S is then used as the input to Algorithm 2, which executes exactly the same procedure described in Section III-B, to build the transmission vector x(S).

Algorithm 1
Generate V^S

procedure NEWPLACEMENT(V, S)
  S' ← [K]\S, Φ(S') ← ∅
  for all p ∈ [P] do
    if v_{p,k} = 1, ∀k ∈ S' then Φ(S') ← Φ(S') ∪ {p}
  for all p ∈ Φ(S'), k ∈ [K] do v^S_{p,k} ← 1
  for all p ∈ [P], k ∈ S' do v^S_{p,k} ← 0
  for all p ∈ [P]\Φ(S'), k ∈ S do v^S_{p,k} ← v_{p,k}

Algorithm 2
Build Transmission Vector x(S)

procedure BUILDTX(S, V^S, {W(1), W(2), ..., W(K)})
  x(S) ← 0
  for all V ⊆ S with |V| = t + 1 do
    Φ(V) ← ∅
    for all p ∈ [P] do
      if v^S_{p,k} = 0, ∀k ∈ [K]\V then Φ(V) ← Φ(V) ∪ {p}
    X(V) ← 0
    for all p ∈ Φ(V), k ∈ V do
      if v^S_{p,k} = 0 then
        X(V) ← X(V) ⊕ W^{q(W(k),k)}_p(k)
        q(W(k), k) ← q(W(k), k) + 1
    x(S) ← x(S) + X(V) × u(V) × \sqrt{p(V)}

Note that V^S is generated such that no term in x(S) needs to be nulled (or suppressed) at more than L − 1 users. In Algorithm 2, the initial values of the superscripts q(n, k) are all set to one.

Example 2.
Assume K = 5, t = L = 2 and the file library is {A, B, C, D, E}. The scheme of [9] requires subpacketization \binom{5}{2}\binom{2}{1} = 20, but using the cache placement matrix

V = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 \end{bmatrix},   (15)

the required subpacketization is reduced to 5 × \binom{2}{1} = 10. For the example demand set {A, B, C, D, E} we need \binom{5}{4} = 5 consecutive transmissions, which are built as

x(S_1) = (D_2 ⊕ B_3) u_5 + (E_3 ⊕ C_4) u_2 + E_2 u_4 + B_4 u_3
x(S_2) = (E_3 ⊕ C_4) u_1 + (A_4 ⊕ D_5) u_3 + A_3 u_5 + C_5 u_4
x(S_3) = (A_4 ⊕ D_5) u_2 + (E_1 ⊕ B_5) u_4 + B_4 u_1 + D_1 u_5
x(S_4) = (E_1 ⊕ B_5) u_3 + (C_1 ⊕ A_2) u_5 + C_5 u_2 + E_2 u_1
x(S_5) = (C_1 ⊕ A_2) u_4 + (D_2 ⊕ B_3) u_1 + D_1 u_3 + A_1 u_2

where S_k = [K]\{k}, u_k is defined such that ‖u_k‖ = 1 and u_k ⊥ h_k, and we have ignored the power coefficients and the subpacket superscripts for notational simplicity (as Q = 2, each packet appears in two transmissions, once per subpacket). It can easily be checked that after all the transmissions are concluded, all users can decode their requested files.

IV. SIMULATION RESULTS
We provide simulation results for two network setups, with parameters shown in Table I. In order to build valid placement matrices we use placement block concatenation, as described in Section III-D. Due to lack of space we explicitly explain the procedure for Network 1 only. For this network, there exist two placement blocks of size 6,

V_1 = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},  V_2 = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix},

and another placement block of size 3:

V_3 = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}.

V_1 and V_2 are confirming examples that placement blocks of a given size are not necessarily unique. Using V_1, V_2, V_3 and their concatenations, one can create valid placement matrices with the P values mentioned in Table I. For example, P = 9 can be created by concatenating either V_1 or V_2 with V_3, and P = 12 can be created by concatenating V_1 and V_2. The case P = 15 is equivalent to the scheme of [9] and is created by concatenating all three blocks. It is worth mentioning that while the scheme presented in [12] is not readily applicable to Network 1 (as K/L is not an integer), our scheme easily provides a cache placement with subpacketization P = 3.

TABLE I: Simulation Parameters

            Network 1           Network 2
K           6                   6
t           2                   3
L           4                   3
P values    {3, 6, 9, 12, 15}   {2, 6, 12, 18, 20}
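The block-concatenation procedure of Section III-D used here can be sketched as follows; the starting rows below are one consistent choice of weight-t vectors for K = 6, t = 2:

```python
def placement_block(first_row):
    """Circularly shift first_row until it repeats; the distinct shifts
    form a placement block (Section III-D)."""
    rows, row = [], tuple(first_row)
    while row not in rows:
        rows.append(row)
        row = (row[-1],) + row[:-1]     # circular shift by one position
    return rows

# Network 1 (K = 6, t = 2): three blocks, of sizes 6, 6 and 3.
B1 = placement_block([1, 1, 0, 0, 0, 0])   # 6 rows
B2 = placement_block([1, 0, 1, 0, 0, 0])   # 6 rows
B3 = placement_block([1, 0, 0, 1, 0, 0])   # 3 rows (period 3)
print(len(B1), len(B2), len(B3))           # -> 6 6 3
# Concatenations give valid placement matrices with P in {3, 6, 9, 12, 15};
# all three blocks together recover the scheme of [9]:
print(len(B1 + B2 + B3))                   # -> 15
```

The three blocks partition the 15 possible weight-2 rows by the cyclic distance between their two non-zero entries, which is why the distance-3 block closes after only three shifts.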
In Figures 2 and 3 we have plotted simulation results for Networks 1 and 2, respectively. For the simulations we have used optimized beamformers as in (5) [10]. Moreover, the efficiency indices corresponding to different P values for both networks are presented in Table II. It can be verified that higher subpacketization usually results in a better efficiency index, which in turn means a higher symmetric rate. Also, as the slopes are the same for all curves in Figures 2 and 3, it is clear that the selection of P does not affect the multicasting gain.

Although for Network 1 all possible P values are mentioned, the P values considered for Network 2 are a subset of all possible values. For example, there exists a placement with P = 8, which is not considered.

There exist special cases for which higher subpacketization reduces the efficiency index. This is not covered here due to lack of space.
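The efficiency index (13) for the K = t + L case can be computed by direct counting over all subsets V; a minimal sketch, using a P = 4 circulant placement for K = 4, t = L = 2 as a check (0.75 matches the value derived in Example 1):

```python
from itertools import combinations

def efficiency_index(V_mat, t, k):
    """Gamma(k) of (13) for K = t + L: phi counts codewords X(V) that
    actually carry data; theta counts those received by user k that
    contain none of its missing packets (pure power loss)."""
    K = len(V_mat[0])
    phi = theta = 0
    for Vset in combinations(range(K), t + 1):
        others = set(range(K)) - set(Vset)
        # Phi(V) of (10): packets cached by nobody outside Vset
        Phi = [p for p, row in enumerate(V_mat)
               if all(row[j] == 0 for j in others)]
        if not Phi:
            continue                # empty codeword, not transmitted
        phi += 1
        if k in Vset and all(V_mat[p][k] == 1 for p in Phi):
            theta += 1              # term fully removed by user k's cache
    return 1 - theta / phi

V_mat = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
print(efficiency_index(V_mat, 2, 0))   # -> 0.75
```

By the circulant symmetry of the placement, the same value is obtained for every user, consistent with the observation at the end of Example 1.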
Fig. 2: Rate vs SNR, Optimized Beamformer - K = 6, t = 2, L = 4.

Fig. 3: Rate vs SNR, Optimized Beamformer - K = 6, t = 3, L = 3.

TABLE II: Efficiency indices for different P values (Network 1: P ∈ {3, 6, 9, 12, 15}; Network 2: P ∈ {2, 6, 12, 18, 20}).
Finally, in Figure 4 we have plotted the rate advantage of various subpacketization levels over P = 3, versus SNR, for Network 1 (similar results hold for Network 2). It is clear that higher subpacketization results in a better rate advantage, especially at low SNR.

V. CONCLUSION AND FUTURE WORK
We provided a new scheme for multi-antenna coded caching, which enables the subpacketization level to be selected freely among a set of predefined values depending on basic network parameters such as user count, antenna count and global cache ratio. We also proposed a simple efficiency index as a performance indicator (in terms of symmetric rate) at various subpacketization levels. Numerical examples demonstrate that larger subpacketization generally results in a larger efficiency index and a higher symmetric rate, while low subpacketization incurs a significant loss in the achievable rate. Our scheme enables more efficient caching decisions, tailored to the available computational and power resources.

Possible future research directions include finding upper bounds on the achievable symmetric rate for various subpacketization levels, further simplifying the scheme for large networks, and a thorough comparison of various beamformer design techniques (ZF versus optimized design versus approximate solutions).

Fig. 4: Rate Advantage over P = 3 - K = 6, t = 2, L = 4.

REFERENCES

[1] C. V. N. Index, "Global mobile data traffic forecast update, 2017–2022 white paper,"
Cisco: San Jose, CA, USA, 2019.
[2] E. Bastug, M. Bennis, and M. Debbah, "Living on the edge: The role of proactive caching in 5G wireless networks," IEEE Communications Magazine, vol. 52, no. 8, pp. 82–89, 2014.
[3] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. Leung, "Cache in the air: Exploiting content caching and delivery techniques for 5G systems," IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139, 2014.
[4] M. A. Maddah-Ali and U. Niesen, "Fundamental limits of caching," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856–2867, 2014.
[5] ——, "Decentralized coded caching attains order-optimal memory-rate tradeoff," IEEE/ACM Transactions on Networking, vol. 23, no. 4, pp. 1029–1040, 2015.
[6] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, "Online coded caching," IEEE/ACM Transactions on Networking, vol. 24, no. 2, pp. 836–845, 2016.
[7] N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. N. Diggavi, "Hierarchical coded caching," IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3212–3229, 2016.
[8] S. P. Shariatpanahi, S. A. Motahari, and B. H. Khalaj, "Multi-server coded caching," IEEE Transactions on Information Theory, vol. 62, no. 12, pp. 7253–7271, 2016.
[9] S. P. Shariatpanahi, G. Caire, and B. H. Khalaj, "Multi-antenna coded caching," in 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017, pp. 2113–2117.
[10] A. Tölli, S. P. Shariatpanahi, J. Kaleva, and B. Khalaj, "Multi-antenna interference management for coded caching," arXiv preprint arXiv:1711.03364, 2017.
[11] A. Tölli, S. P. Shariatpanahi, J. Kaleva, and B. Khalaj, "Multicast beamformer design for coded caching," in 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018, pp. 1914–1918.
[12] E. Lampiris and P. Elia, "Adding transmitters dramatically boosts coded-caching gains for finite file sizes," IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, 2018.