On the Gap between Scalar and Vector Solutions of Generalized Combination Networks
Hedongliang Liu, Hengjia Wei, Sven Puchinger, Antonia Wachter-Zeh, Moshe Schwartz
Hedongliang Liu*, Hengjia Wei†, Sven Puchinger‡, Antonia Wachter-Zeh§, and Moshe Schwartz¶
*Electrical and Computer Engineering, Technical University of Munich, [email protected]
†Electrical and Computer Engineering, Ben-Gurion University of the Negev, [email protected]
‡Applied Mathematics and Computer Science, Technical University of Denmark, [email protected]
§Electrical and Computer Engineering, Technical University of Munich, [email protected]
¶Electrical and Computer Engineering, Ben-Gurion University of the Negev, [email protected]
Abstract—We study scalar-linear and vector-linear solutions to the generalized combination network. We derive new upper and lower bounds on the maximum number of nodes in the middle layer, depending on the network parameters. These bounds improve and extend the parameter range of known bounds. Using these new bounds, we present a general lower bound on the gap in the alphabet size between scalar-linear and vector-linear solutions.
I. INTRODUCTION
Network coding has been attracting increasing attention since the seminal papers [1], [15], as it increases the communication throughput compared to routing. An algebraic formulation of the network coding problem can be found in [12], and a detailed survey on multicast network coding in [9]. Throughout this paper we consider only linear networks, namely, networks in which all nodes compute linear functions. A line of recent works distinguishes between scalar network coding and vector network coding, depending on whether the messages sent along network edges are scalars or vectors. Vector network coding was mentioned in [5] as fractional network coding, and extended to vector network coding in [6]. In [18], a network was constructed whose minimal alphabet size for a scalar-linear solution is strictly larger than the minimal alphabet size for a vector-linear solution, albeit with a small gap. In [7], a larger gap was found in certain carefully constructed networks, which was later extended even to minimal networks in [4].

The main object we study in this paper is the generalized combination network. The (non-generalized) combination networks were first introduced in [16]. It was shown in [4], [7] that vector-linear network coding does not outperform scalar-linear coding schemes, in terms of the gap, for minimal combination networks. The generalized combination networks were first introduced in [7], and a gap was shown to exist for certain network parameters.

The goal of this work is to investigate the gap between the minimum required alphabet size for scalar-linear and vector-linear solutions of generalized combination networks. Our main contributions are: we first develop new upper and lower bounds on the maximal number of nodes in the middle layer of such networks, depending on the other parameters of the network. Our new upper bounds are better than a previous bound from [8] in some parameter range, and the lower bounds cover a wide range of network parameters. We then convert these bounds into bounds on the minimal alphabet size of a linear solution for many networks. Finally, we derive a lower bound on the gap for any fixed network structure. To the best of our knowledge, this is the first lower bound that applies to nearly all generalized combination networks.

[Figure 1: Illustration of the (ε, ℓ)-N_{h,r,αℓ+ε} network: a source with h messages x = (x_1, x_2, ..., x_h), r middle nodes, and N = (r choose α) receivers; ℓ parallel links connect the source to each middle node and each middle node to each of its receivers, and ε direct links connect the source to each receiver.]

The rest of this paper is organized as follows. In Section II we introduce the concept of generalized combination networks and provide the notation used throughout this paper. In Section III we give two new upper bounds on the maximum number of middle-layer nodes, and in Section IV we give two new lower bounds on it. In Section V we show the gap between the field sizes of scalar-linear and vector-linear solutions. In Section VI we conclude with a brief discussion of the results.

This research was supported by the German Research Foundation (DFG) with a German Israeli Project Cooperation (DIP) under grants no. PE2398/1-1 and KR3517/9-1, and by the DFG Emmy Noether Program under grant no. WA3907/1-1.

II. PRELIMINARIES
A. The Generalized Combination Network

An (ε, ℓ)-N_{h,r,αℓ+ε} generalized combination network is illustrated in Figure 1 (see also [7]). The network has three layers. The first layer consists of a source with h source messages. The source transmits the h messages to r middle nodes via ℓ parallel links (solid lines) between itself and each middle node. Any α middle nodes in the second layer are connected to a unique receiver (again, by ℓ parallel links each). Each receiver is also connected to the source via ε direct links (dashed lines).

B. Network Coding Solutions

In the multicast setting, we have the following coding problem: for each node in the graph, find functions of its incoming messages to transmit on its outgoing links, such that each receiver can recover all the messages. Such an assignment of functions is called a solution of the network. If these functions are linear, we obtain a linear solution. In linear network coding, each linear function for a receiver consists of coding coefficients for each incoming message. If the messages are scalars in F_q and the coding coefficients are vectors over F_q, the solution is called a scalar linear solution, denoted a (q, 1)-linear solution. If the messages are vectors in F_q^t, and the coding coefficients are matrices over F_q, it is called a vector linear solution, denoted a (q, t)-linear solution.

It was shown in [7, Thm. 8] that the (ε, ℓ)-N_{h,r,αℓ+ε} network has a trivial solution if h ≤ ℓ + ε, and that it has no solution if h > αℓ + ε. In this paper we focus on the non-trivially solvable networks, so it is assumed that ℓ + ε < h ≤ αℓ + ε throughout the paper.

C. The Field Size Gap
The field size of a linear solution is an important parameter that directly influences the complexity of the computations at the network nodes. In order to investigate the advantage of vector solutions in terms of the field size, a metric measuring the improvement needs to be specified. We follow the notation of [4] to distinguish between scalar and vector linear solutions. Given a network N, let

q_s(N) := min{ q : N has a (q, 1)-linear solution }.

A (q_s(N), 1)-linear solution is said to be scalar-optimal. Similarly, let

q_v(N) := min{ q^t : N has a (q, t)-linear solution }.

Note that q_v(N) is defined by the size of the vector space, rather than the field size. For q^t = q_v(N), a (q, t)-linear solution is called vector-optimal. By definition, q_s(N) ≥ q_v(N). Small field sizes are preferable in practical algorithm designs for network coding [10], [13], [14]. We define the gap as

gap(N) := log(q_s(N)) − log(q_v(N)),

which intuitively measures the advantage of vector network coding by the number of extra bits per link we have to pay for an optimal scalar-linear solution compared to an optimal vector-linear solution. We note that this differs from the definition of the gap in [4].

D. Codes in the Grassmannian Space
The Grassmannian G(n, k) is the set of all subspaces of F_q^n of dimension k ≤ n. The cardinality of G(n, k) is given by the well-known q-binomial (Gaussian) coefficient, denoted [n choose k]_q:

|G(n, k)| = [n choose k]_q = ∏_{i=0}^{k−1} (q^n − q^i)/(q^k − q^i) = ∏_{i=0}^{k−1} (q^{n−i} − 1)/(q^{k−i} − 1),

which satisfies

q^{k(n−k)} ≤ [n choose k]_q < γ · q^{k(n−k)},   (1)

where γ < 4 is a constant [11, Lemma 4].

Definition 1 (Covering Grassmannian Codes [8]). An α-(n, k, δ)^c_q covering Grassmannian code C is a subset of G(n, k) such that each subset of α codewords of C spans a subspace whose dimension is at least δ + k in F_q^n.

The following theorem from [8] shows the connection between covering Grassmannian codes and linear network coding solutions.
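Before turning to that connection, a quick numeric sanity check of the q-binomial and the two-sided bound (1) may be useful. The following sketch is ours (the function name is illustrative, and γ is taken as 4, a safe value for the constant in [11, Lemma 4]):

```python
from math import prod

def gaussian_binomial(n, k, q):
    """|G(n, k)|: the number of k-dimensional subspaces of F_q^n."""
    if k < 0 or k > n:
        return 0
    num = prod(q ** (n - i) - 1 for i in range(k))
    den = prod(q ** (k - i) - 1 for i in range(k))
    return num // den  # the division is always exact

# Check the two-sided bound (1); gamma = 4 is a safe value for the constant.
GAMMA = 4
for n, k, q in [(4, 2, 2), (6, 3, 2), (5, 2, 3)]:
    g = gaussian_binomial(n, k, q)
    assert q ** (k * (n - k)) <= g < GAMMA * q ** (k * (n - k))
```

For instance, gaussian_binomial(4, 2, 2) evaluates to 35, squarely between 2^4 = 16 and 4 · 2^4 = 64, in agreement with (1).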
Theorem 1 ([8, Thm. 4]). The (ε, ℓ)-N_{h,r,αℓ+ε} network is solvable with a (q, t)-linear solution if and only if there exists an α-(ht, ℓt, ht − ℓt − εt)^c_q code with r codewords.

III. UPPER BOUNDS ON THE MIDDLE LAYER
In this section we fix the network parameters α, ℓ, ε, h, and we upper bound the number of nodes in the middle layer, r.

Lemma 1. Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0, h − ε > ℓ, and let T be a collection of subspaces of F_q^{(h−ε)t} such that (i) each subspace has dimension at most ℓt; and (ii) any subset of α subspaces spans F_q^{(h−ε)t}. Then we have αℓ ≥ h − ε and

|T| ≤ (⌊(h−ε)/ℓ⌋ − 1) + (α − ⌊(h−ε)/ℓ⌋ + 1) · [ℓt+1 choose 1]_q.

Proof: Take arbitrarily ⌊(h−ε)/ℓ⌋ − 2 subspaces from T, and take arbitrarily a subspace W of dimension (h−ε)t − ℓt − 1 which contains all these ⌊(h−ε)/ℓ⌋ − 2 subspaces. Then for any subspace T ∈ T, there is a hyperplane of F_q^{(h−ε)t} containing both W and T. Note that there are [ℓt+1 choose ℓt]_q = [ℓt+1 choose 1]_q hyperplanes containing W, and each of them contains at most α − 1 subspaces from T. Thus

|T| ≤ (⌊(h−ε)/ℓ⌋ − 2) + [ℓt+1 choose 1]_q · (α − 1 − (⌊(h−ε)/ℓ⌋ − 2)) ≤ (⌊(h−ε)/ℓ⌋ − 1) + (α − ⌊(h−ε)/ℓ⌋ + 1) · [ℓt+1 choose 1]_q.

Theorem 2.
Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0, h − ε > ℓ, and let S be a collection of subspaces of F_q^{ht} such that (i) each subspace has dimension at most ℓt; and (ii) any subset of α subspaces spans a subspace of dimension at least (h − ε)t. Then we have αℓ ≥ h − ε and

|S| ≤ [(ε+ℓ)t choose εt]_q · (α − ⌊(h−ε)/ℓ⌋ + 1) · (q^{ℓt+1} − 1)/(q − 1) + ⌊(h−ε)/ℓ⌋ − 1
   (∗)< γ · (α − ⌊(h−ε)/ℓ⌋ + 1) · q^{ℓt(εt+1)+1} + ⌊(h−ε)/ℓ⌋ − 1.

Proof: Take arbitrarily ⌊(h−ε)/ℓ⌋ − 1 subspaces from S and a subspace W ⊂ F_q^{ht} of dimension (h−ε)t − ℓt such that W contains all these ⌊(h−ε)/ℓ⌋ − 1 subspaces. Then for any subspace S ∈ S there is a subspace of dimension (h−ε)t containing both W and S. Let m := [(ε+ℓ)t choose εt]_q. Then there are m subspaces of dimension (h−ε)t containing W, say W_1, W_2, ..., W_m. Note that every α subspaces in W_i ∩ S span the subspace W_i. According to Lemma 1, we have

|W_i ∩ S| ≤ (⌊(h−ε)/ℓ⌋ − 1) + (α − ⌊(h−ε)/ℓ⌋ + 1) · [ℓt+1 choose 1]_q.

Hence,

|S| ≤ Σ_{i=1}^{m} (|W_i ∩ S| − (⌊(h−ε)/ℓ⌋ − 1)) + ⌊(h−ε)/ℓ⌋ − 1
   ≤ [(ε+ℓ)t choose εt]_q · (α − ⌊(h−ε)/ℓ⌋ + 1) · (q^{ℓt+1} − 1)/(q − 1) + ⌊(h−ε)/ℓ⌋ − 1.

The inequality (∗) is derived from (1).

The following corollary rephrases Theorem 2 with network parameters.

Corollary 1.
Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0, and h − ε > ℓ. If (ε, ℓ)-N_{h,r,αℓ+ε} has a (q, t)-linear solution, then

r < γθ · q^{ℓt(εt+1)+1} + α − θ,

where θ := α − ⌊(h−ε)/ℓ⌋ + 1.

Proof: If a (q, t)-linear solution exists, then each of the r nodes in the middle layer gets a subspace of dimension at most ℓt of the source message space. Since all receivers are able to recover the entire source message space, every α-subset of the middle nodes spans a space of dimension at least (h − ε)t. We then use Theorem 2.

Theorem 2 and Corollary 1 are valid for all α ≥ 2. However, we derive a better upper bound for α = 2, as shown in the following theorem.

Theorem 3.
Let h, ℓ, t ≥ 1, ε ≥ 0, and let S be a collection of subspaces of F_q^{ht} such that (i) each subspace has dimension at most ℓt; and (ii) the sum of any two subspaces has dimension at least (h − ε)t. Then we have

|S| ≤ [ht choose (2ℓ−h+ε)t+1]_q / [ℓt choose (2ℓ−h+ε)t+1]_q < γ · q^{(h−ℓ)(2ℓ+ε−h)t² + (h−ℓ)t}.

Proof:
We may assume that each subspace has dimension exactly ℓt. Since the sum of every two subspaces has dimension at least (h − ε)t, their intersection has dimension at most 2ℓt − (h − ε)t. It follows that any subspace of dimension 2ℓt − (h − ε)t + 1 = (2ℓ−h+ε)t + 1 is contained in at most one subspace of S. Note that there are [ht choose (2ℓ−h+ε)t+1]_q subspaces of dimension (2ℓ−h+ε)t + 1 in F_q^{ht}, and each subspace of dimension ℓt contains [ℓt choose (2ℓ−h+ε)t+1]_q such subspaces. We have that

|S| ≤ [ht choose (2ℓ−h+ε)t+1]_q / [ℓt choose (2ℓ−h+ε)t+1]_q.

IV. LOWER BOUNDS ON THE MIDDLE LAYER NODES
We now turn to study a lower bound on the number of nodes in the middle layer, when we fix the network parameters α, ℓ, ε, h. The main results are summarized in Theorem 4 and Corollary 2. In the following, we first give the condition on the coding coefficients under which a linear solution exists.

Let x_1, ..., x_h ∈ F_q^t denote the h source messages and y_1, ..., y_N ∈ F_q^{(ε+αℓ)t} the messages received by the receivers.¹ Since each middle-layer node has ℓ incoming edges and ℓ outgoing edges directed at a given receiver, we may assume without loss of generality that this node simply forwards its incoming messages. Let us denote the coding coefficients used by the source node for the messages transmitted to the r middle nodes by A_1, ..., A_r ∈ F_q^{ℓt×ht}. Additionally, we denote the coding coefficients used by the source node for the messages transmitted directly to the receivers by B_1, ..., B_N ∈ F_q^{εt×ht}.

Each receiver has to solve the following linear system of equations (LSE):

y_i = [A_{i_1}; ...; A_{i_α}; B_i] · [x_1; ...; x_h],   ∀ i = 1, ..., N = (r choose α),

where [A_{i_1}; ...; A_{i_α}; B_i] ∈ F_q^{(ε+αℓ)t×ht} denotes the vertical stacking of the coding coefficient matrices, [x_1; ...; x_h] ∈ F_q^{ht×1}, and {A_{i_1}, ..., A_{i_α}} ⊂ {A_1, ..., A_r}. Any receiver can recover the h source messages x_1, ..., x_h if and only if

rank([A_{i_1}; ...; A_{i_α}]) ≥ (h − ε)t,   ∀ i = 1, ..., N,   (2)

where [A_{i_1}; ...; A_{i_α}] ∈ F_q^{αℓt×ht}. Here, a solution of the (ε, ℓ)-N_{h,r,αℓ+ε} network is a set of coding coefficients {A_1, ..., A_r} such that (2) holds (where B_1, ..., B_N may be easily determined from the solution).

¹The vector y_i is the concatenation of all the messages received by the i-th receiver node.

A. A Lower Bound by the Lovász Local Lemma

Lemma 2 (The Lovász Local Lemma [2, Ch. 5], [3]). Let E_1, E_2, ..., E_k be a sequence of events. Each event occurs with probability at most p, and each event is independent of all the other events except for at most d of them. If ep(d+1) ≤ 1, then there is a non-zero probability that none of the events occurs.

We choose the matrices A_1, ..., A_r ∈ F_q^{ℓt×ht} independently and uniformly at random. For 1 ≤ i_1 < ··· < i_α ≤ r, we define the event

E_{i_1,...,i_α} := { (A_{i_1}, ..., A_{i_α}) : rank([A_{i_1}; ...; A_{i_α}]) < (h − ε)t }.

Let p := Pr(E_{i_1,...,i_α}) and denote by d the number of other events E_{i'_1,...,i'_α} that are dependent on E_{i_1,...,i_α}.

Lemma 3.
Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0. Fixing 1 ≤ i_1 < ··· < i_α ≤ r, we have

Pr(E_{i_1,...,i_α}) ≤ 2γ · q^{(h−αℓ−ε)εt² + (h−αℓ−2ε)t − 1}.

Proof: The number of matrices A ∈ F_q^{m×n} of rank s is

M(m, n, s) := ∏_{j=0}^{s−1} (q^m − q^j)(q^n − q^j)/(q^s − q^j) ≤ γ · q^{(m+n)s − s²}.   (3)

Then,

Pr(E_{i_1,...,i_α}) = Σ_{i=0}^{(h−ε)t−1} M(αℓt, ht, i) / q^{αℓht²}
  ≤ Σ_{i=0}^{(h−ε)t−1} γ · q^{(h+αℓ)ti − i²} / q^{αℓht²}   (4)
  ≤ γ · (q/(q−1)) · q^{max_i{(h+αℓ)ti − i²} − αℓht²}   (5)
  = γ · (q/(q−1)) · q^{((h+αℓ)ti − i²)|_{i=(h−ε)t−1} − αℓht²}   (6)
  ≤ 2γ · q^{(h−αℓ−ε)εt² + (h−αℓ−2ε)t − 1},   (7)

where (4) holds due to (3), (5) follows from a geometric sum, and (6) follows by maximizing (h+αℓ)ti − i².

Lemma 4.
Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0. Fixing 1 ≤ i_1 < ··· < i_α ≤ r, the event E_{i_1,...,i_α} is statistically independent of all the other events E_{i'_1,...,i'_α} (1 ≤ i'_1 < ··· < i'_α ≤ r), except for at most α · (r−1 choose α−1) of them.

Proof: For 1 ≤ i_1 < ··· < i_α ≤ r and 1 ≤ i'_1 < ··· < i'_α ≤ r, the events E_{i_1,...,i_α} and E_{i'_1,...,i'_α} are statistically independent if and only if {i_1, ..., i_α} ∩ {i'_1, ..., i'_α} = ∅. Thus, having chosen 1 ≤ i_1 < ··· < i_α ≤ r, there are at most α · (r−1 choose α−1) ways of choosing a dependent event.

Remark 1.
Lemma 4 is a union-bound argument on the number of dependent events. The exact number is (r choose α) − (r−α choose α) − 1. However, the exact expression makes it harder to solve for r later, so we use the simpler bound here.

Theorem 4.
Let α ≥ 2, ε ≥ 0, ℓ, t ≥ 1, and ℓ + ε < h ≤ αℓ + ε be fixed integers. If

r ≤ β · q^{f(t)/(α−1)},

where

β := ((α−1)! / (2eγα))^{1/(α−1)}  and  f(t) := (αℓ+ε−h)εt² + (αℓ+2ε−h)t + 1,

then (ε, ℓ)-N_{h,r,αℓ+ε} has a (q, t)-linear solution.

Proof: By the Lovász Local Lemma, it suffices to show that ep(d+1) ≤ 1. Noting that d + 1 ≤ α · (r−1 choose α−1) ≤ α(r−1)^{α−1}/(α−1)!, we shall require

e · 2γ · q^{(h−αℓ−ε)εt² + (h−αℓ−2ε)t − 1} · α(r−1)^{α−1}/(α−1)! ≤ 1.

Namely,

r ≤ β · q^{((αℓ+ε−h)εt² + (αℓ+2ε−h)t + 1)/(α−1)} + 1 = β · q^{f(t)/(α−1)} + 1.

We omit the plus one for simplicity.
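To make the quantities of this subsection concrete, the following sketch (ours; the function names are illustrative, γ = 4 is used as a safe value for the constant in (1), and the exponents follow our reconstruction of Lemma 3) evaluates the exact rank distribution in (3), the probability bound of Lemma 3, and the bound on r from Theorem 4 for a small example network:

```python
from fractions import Fraction
from math import e, factorial

GAMMA = 4  # a safe value for the constant gamma in (1)

def num_matrices_of_rank(m, n, s, q):
    """M(m, n, s): the number of m x n matrices over F_q of rank s, cf. (3)."""
    out = Fraction(1)
    for j in range(s):
        out *= Fraction((q ** m - q ** j) * (q ** n - q ** j), q ** s - q ** j)
    return int(out)  # the product is always an integer

def prob_rank_deficient(alpha, ell, eps, h, t, q):
    """Exact Pr[rank < (h - eps)t] for a uniform random (alpha*ell*t) x (h*t) matrix."""
    m, n = alpha * ell * t, h * t
    bad = sum(num_matrices_of_rank(m, n, s, q) for s in range((h - eps) * t))
    return Fraction(bad, q ** (m * n))

def theorem4_r_bound(alpha, ell, eps, h, t, q):
    """The quantity beta * q^(f(t)/(alpha-1)) of Theorem 4 (omitting the +1)."""
    beta = (factorial(alpha - 1) / (2 * e * GAMMA * alpha)) ** (1 / (alpha - 1))
    f = (alpha * ell + eps - h) * eps * t ** 2 + (alpha * ell + 2 * eps - h) * t + 1
    return beta * q ** (f / (alpha - 1))

# Example: alpha = 2, ell = 1, eps = 1, h = 3, t = 1, q = 2 (non-trivially solvable).
p = prob_rank_deficient(2, 1, 1, 3, 1, 2)
# Reconstructed Lemma 3 bound: 2*gamma*q^((h-al-e)*e*t^2 + (h-al-2e)*t - 1).
bound = 2 * GAMMA * 2 ** ((3 - 2 - 1) * 1 * 1 + (3 - 2 - 2) * 1 - 1)
assert float(p) <= bound
```

For these small parameters the exact probability is 11/32, comfortably below the (loose) Lemma 3 bound; the Theorem 4 guarantee on r is weak here, reflecting that the bound is of asymptotic, not small-parameter, interest.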
B. A Lower Bound by α-Covering Grassmannian Codes

Let B_q(n, k, δ; α) denote the maximum possible size of an α-(n, k, δ)^c_q covering Grassmannian code.

Let A be a k × (n−k) matrix over F_q, and let I_k be the k × k identity matrix. The matrix [I_k A] can be viewed as a generator matrix of a k-dimensional subspace of F_q^n, which is called the lifting of A. When all the codewords of an MRD code are lifted to k-dimensional subspaces, the result is called a lifted MRD code, denoted by C^MRD.

Theorem 5.
Let n, k, δ, and α be positive integers such that δ ≤ k, δ + k ≤ n, and α ≥ 2. Then

B_q(n, k, δ; α) ≥ (α − 1) · q^{max{k, n−k}·(min{k, n−k} − δ + 1)}.

Proof:
Let m := n − k and K := max{m, n−m}·(min{m, n−m} − δ + 1). Since δ ≤ min{m, n−m}, an [m × (n−m), K, δ]_q MRD code C exists. Let C^MRD be the lifted code of C. Then C^MRD is a subspace code in F_q^n which contains q^K m-dimensional subspaces as codewords, and its minimum subspace distance is 2δ [17]. Hence, for any two distinct C_1, C_2 ∈ C^MRD we have dim(C_1 ∩ C_2) ≤ m − δ.

Now, let D := { C^⊥ : C ∈ C^MRD }. Take α − 1 copies of D and denote their multiset union by D^(α). We claim that D^(α) is an α-(n, k, δ)^c_q covering Grassmannian code. Each codeword of D^(α) is the dual of a codeword in C^MRD, and thus has dimension n − m = k. For arbitrary α codewords D_1, D_2, ..., D_α of D^(α), since each codeword appears at most α − 1 times, there exist 1 ≤ i < j ≤ α such that D_i ≠ D_j. Let C_i := D_i^⊥ and C_j := D_j^⊥. Then C_i and C_j are two distinct codewords of C^MRD. It follows that

dim(Σ_{ℓ=1}^{α} D_ℓ) ≥ dim(D_i + D_j) = n − dim(D_i^⊥ ∩ D_j^⊥) = n − dim(C_i ∩ C_j) ≥ n − m + δ = k + δ.

So far we have shown that D^(α) is an α-(n, k, δ)^c_q covering Grassmannian code. The conclusion follows since

|D^(α)| = (α − 1)|D| = (α − 1)|C^MRD| = (α − 1) · q^{max{k, n−k}·(min{k, n−k} − δ + 1)}.

As a consequence, we have the following:

Corollary 2.
Let α ≥ 2, h, ℓ, t ≥ 1, ε ≥ 0, and h > ℓ + ε. If

r ≤ (α − 1) · q^{g(t)},

where

g(t) := max{ℓt, (h−ℓ)t} · (min{ℓt, (h−ℓ)t} − (h−ℓ−ε)t + 1)
     = ℓεt² + ℓt, if h ≤ 2ℓ,
     = (h−ℓ)(2ℓ+ε−h)t² + (h−ℓ)t, otherwise,

then (ε, ℓ)-N_{h,r,αℓ+ε} has a (q, t)-linear solution.

Note that Theorem 4 and Corollary 2 are both sufficient conditions on r such that a solution exists for (ε, ℓ)-N_{h,r,αℓ+ε}. Thus they can be regarded as lower bounds on the maximum number of nodes in the middle layer.

V. BOUNDS ON THE FIELD SIZE GAP

In the previous sections, we discussed bounds on the maximum number of nodes in the middle layer. To discuss gap(N), we first need the following conditions on the smallest field size q_s(N) or q_v(N) for which a network N is solvable.

Lemma 5.
Let α ≥ 2, r, h, ℓ, t ≥ 1, ε ≥ 0. If (ε, ℓ)-N_{h,r,αℓ+ε} has a (q, t)-linear solution, then

q^t ≥ ((r + θ − α)/(γθ))^{1/(ℓ(εt+1)+1)}, if h > 2ℓ + ε,
q^t ≥ (r/(γ(α−1)))^{1/(ℓ(εt+1))}, otherwise,

where θ := α − ⌊(h−ε)/ℓ⌋ + 1 and γ is the constant from (1).

Proof: It follows from Corollary 1 that for h > 2ℓ + ε,

q^{ℓt(εt+1)+1} > (r + θ − α)/(γθ),

and since t ≥ 1, we have (q^t)^{ℓ(εt+1)+1} ≥ q^{ℓt(εt+1)+1}, so the first case follows. The second case may be derived from [8] in a similar manner.

Lemma 6.
Let α ≥ 2, r, h, ℓ, t ≥ 1, ε ≥ 0. There exists a (q, t)-linear solution to (ε, ℓ)-N_{h,r,αℓ+ε} whenever

q^t ≥ (r/β)^{(α−1)t/f(t)}, if h > 2ℓ + ε,
q^t ≥ (r/(α−1))^{t/g(t)}, otherwise,   (8)

where β and f(t) are defined as in Theorem 4, and g(t) is defined as in Corollary 2.

Proof: The proof is similar to that of Lemma 5, and the cases follow from Theorem 4 and Corollary 2, respectively.

Lemma 5 and Lemma 6 can be seen as necessary and sufficient conditions, respectively, on the pair (q, t) such that a (q, t)-linear solution exists. In the following, we use the lemmas above to derive a lower bound on gap(N) for a given network N. The bound is determined only by the network parameters.

Theorem 6.
Let α ≥ 2, r, h, ℓ ≥ 1, ε ≥ 0. Then for the (ε, ℓ)-N_{h,r,αℓ+ε} network,

gap(N) ≥ (1/(ℓ(ε+1)+1)) · log((r + θ − α)/(γθ)) − t_Δ, if h > 2ℓ + ε,
gap(N) ≥ (1/(ℓ(ε+1))) · log(r/(γ(α−1))) − t_⋆, otherwise,

where t_Δ is the smallest positive integer such that 2^{f(t_Δ)/(α−1)} ≥ r/β, and t_⋆ is the smallest positive integer such that 2^{g(t_⋆)} ≥ r/(α−1). Here, β and f(t) are defined as in Theorem 4, and g(t) is defined as in Corollary 2.

Proof: Let us first consider the case h > 2ℓ + ε. According to Lemma 5, we have the lower bound on the smallest field size of a scalar solution,

q_s(N) ≥ ((r + θ − α)/(γθ))^{1/(ℓ(ε+1)+1)}.

For vector solutions, according to Lemma 6, we want to find (q, t) such that q^{f(t)/(α−1)} ≥ r/β. Since t_Δ is the smallest positive integer t such that 2^{f(t)/(α−1)} ≥ r/β, it is guaranteed that a (2, t_Δ)-linear solution exists. Therefore, q_v(N) (the smallest value of q^t) satisfies q_v(N) ≤ 2^{t_Δ}. The lower bound then follows directly from the definition of gap(N). The other case can be proved in the same manner.

By carefully bounding t_⋆ and t_Δ, the following is obtained:

Corollary 3.
Let α ≥ 2, r, h, ℓ ≥ 1, ε ≥ 1. Then for the (ε, ℓ)-N_{h,r,αℓ+ε} network,

gap(N) ≥ (log(r/(α−1)) − 2)/(ℓ(ε+1)) − √(log(r/(α−1))/(ℓε)) − 1, if h ≤ 2ℓ + ε,
gap(N) ≥ (1/(ℓ(ε+1)+1)) · log((r + θ − α)/(γθ)) − √((α−1) · log(r/β)/((αℓ+ε−h)ε)) − 1, otherwise.

In particular, if all parameters are constants except for r → ∞, then gap(N) = Ω(log r).

VI. DISCUSSION
In this work, we studied necessary and sufficient conditions for the existence of (q, t)-linear solutions to the generalized combination network. The derived conditions led us to a lower bound on the gap for almost all network parameters. Unlike previous works, e.g., [4], [7], which focused on engineering specific networks with a high gap, we start with almost any given network and provide an expression for its gap. It is of particular interest to note the implications of Corollary 3. Fixing the number of messages and the parameters relating to the connectivity level of the network, we only vary the number of middle-layer nodes, r, or equivalently, the number of receivers N = (r choose α). Corollary 3 then shows that the gap is Ω(log r) = Ω(log N), namely, that scalar-linear solutions over-pay on the order of log(r) extra bits per link to solve the network, in comparison with vector-linear solutions. Our bounds, however, are weak in the case of no direct links, i.e., ε = 0, and improving them is left for future research. In the full version of this paper we further study the bounds presented here, and compare them with the other known bounds of [8].

REFERENCES

[1] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Transactions on Information Theory, vol. 46, no. 4, pp. 1204–1216, Jul. 2000.
[2] N. Alon and J. H. Spencer, The Probabilistic Method, 3rd ed. Wiley Publishing, 2008.
[3] J. Beck, "An algorithmic approach to the Lovász local lemma. I," Random Structures and Algorithms, vol. 2, no. 4, pp. 343–365, 1991.
[4] H. Cai, T. Etzion, M. Schwartz, and A. Wachter-Zeh, "Network coding solutions for the combination network and its subgraphs," in Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT 2019), Paris, France, Jul. 2019, pp. 862–866.
[5] J. Cannons, R. Dougherty, C. Freiling, and K. Zeger, "Network routing capacity," IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 777–788, Mar. 2006.
[6] J. B. Ebrahimi and C. Fragouli, "Algebraic algorithms for vector network coding," IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 996–1007, Feb. 2011.
[7] T. Etzion and A. Wachter-Zeh, "Vector network coding based on subspace codes outperforms scalar linear network coding," IEEE Transactions on Information Theory, vol. 64, no. 4, pp. 2460–2473, Apr. 2018.
[8] T. Etzion and H. Zhang, "Grassmannian codes with new distance measures for network coding," IEEE Transactions on Information Theory, vol. 65, no. 7, pp. 4131–4142, Jul. 2019.
[9] C. Fragouli and E. Soljanin, "(Secure) linear network coding multicast – a theoretical minimum and some open problems," Designs, Codes and Cryptography, vol. 78, no. 1, pp. 269–310, 2016.
[10] D. Gonçalves, S. Signorello, F. M. Ramos, and M. Médard, "Random linear network coding on programmable switches," IEEE, 2019.
[11] R. Koetter and F. R. Kschischang, "Coding for errors and erasures in random network coding," IEEE Transactions on Information Theory, vol. 54, no. 8, pp. 3579–3591, Aug. 2008.
[12] R. Koetter and M. Médard, "An algebraic approach to network coding," IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 782–795, Oct. 2003.
[13] M. Langberg and A. Sprintson, "Recent results on the algorithmic complexity of network coding," in Proc. of the 5th Workshop on Network Coding, Theory, and Applications, 2009.
[14] M. Langberg, A. Sprintson, and J. Bruck, "The encoding complexity of network coding," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2386–2397, 2006.
[15] S.-Y. R. Li, R. W. Yeung, and N. Cai, "Linear network coding," IEEE Transactions on Information Theory, vol. 49, no. 2, pp. 371–381, Feb. 2003.
[16] S. Riis and R. Ahlswede, "Problems in network coding and error correcting codes," Lecture Notes in Computer Science, vol. 4123, pp. 861–897, 2006.
[17] D. Silva, F. R. Kschischang, and R. Koetter, "A rank-metric approach to error control in random network coding," IEEE Transactions on Information Theory, vol. 54, no. 9, pp. 3951–3967, Sep. 2008.
[18] Q. Sun, X. Yang, K. Long, X. Yin, and Z. Li, "On vector linear solvability of multicast networks,"