Rateless Codes for Single-Server Streaming to Diverse Users
Yao Li
ECE Department, Rutgers University, Piscataway NJ, [email protected]
Emina Soljanin
Bell Labs, Alcatel-Lucent, Murray Hill NJ 07974, [email protected]
Abstract — We investigate the performance of rateless codes for single-server streaming to diverse users, assuming that diversity in users is present not only because they have different channel conditions, but also because they demand different amounts of information and have different decoding capabilities. The LT encoding scheme is employed. While some users accept output symbols of all degrees and decode using belief propagation, others only collect degree-1 output symbols and run no decoding algorithm. We propose several performance measures and optimize the performance of the rateless code used at the server through the design of the code degree distribution. Optimization problems are formulated for the asymptotic regime and solved as linear programming problems. The optimized performance shows great improvement in total bandwidth consumption over using the conventional ideal soliton distribution, or simply sending separately encoded streams to different types of user nodes. Simulation experiments confirm the usability of the optimization results obtained for the asymptotic regime as a guideline for finite-length code design.
I. INTRODUCTION
A. Motivation
The growing popularity of ubiquitous computing, along with the surging demand for digital media distribution services such as YouTube™, has brought up the issue of efficient media sharing in a heterogeneous network composed of links of diverse quality as well as terminals of varied computing power and demands on media quality. Consider the over-the-air broadcast of digital TV streams. A specialized "plugged" receiver, such as an HDTV set at home, may have more computing power than a small portable device, such as a cellphone, and hence the former may be able to perform more complex decoding algorithms than the latter. Meanwhile, the quality of the broadcast channels may vary with the location of the receiver, indoors or outdoors, near or far from the transmitting tower. Moreover, devices may need different amounts of data to display a video stream according to their screen resolutions.

This work was supported by the NSF grant No. CNS 0721888.
Here, we are interested in finding an efficient and yet fair way to provide multicast streaming service to all or a majority of the receivers bearing such heterogeneity. One straightforward solution is to simultaneously transmit separately encoded data streams suitable for different devices and channels, but this requires extra bandwidth and is hence less than efficient.

Rateless codes [1], [2] are, roughly speaking, designed for erasure channels in such a way that the set of information symbols may be recovered, by simple decoding, from any subset of the encoding symbols of size equal to or slightly larger than that of the information symbol set. The first practical rateless codes, LT codes, were invented by Michael Luby and published in 2002 [1]. Another class of rateless codes are Raptor codes, a version of which has been written into the 3GPP standard for Multimedia Broadcast/Multicast Service [3]. Rateless codes have the nice features of requiring minimal feedback from the receiver to the sender and of operating well over a range of channel conditions. These features are particularly suitable for broadcast/multicast scenarios. We investigate the possibility of simultaneously serving data sinks of highly heterogeneous decoding capabilities and non-uniform demand for information over channels of diverse quality, with a single rateless coded multicast stream from the source. Specifically, we study the design of LT codes for multicast streaming.
B. Related Work
The performance of LT codes is determined by the degree distribution of the encoding (output) symbols. In [1] and [2], the ideal soliton and robust soliton degree distributions were proposed for minimizing the overhead necessary for recovering all input symbols. However, using these degree distributions when the number of output symbols collected by the receiver is smaller than the total number of input symbols results in the recovery of only a few input symbols. In [4], the optimal degree distributions for recovering a constant fraction of the input symbols from the smallest number of output symbols were studied.

Our work considers multicast streaming to all user nodes with a single data stream. We deal with simultaneous multiple heterogeneities such as link diversity, differences in decoding capabilities (e.g., due to limitations in computing resources), and differences in the volume of information demanded (e.g., low or high resolution video). We are interested in performance measures reflecting the collective properties of all the sink nodes of interest, such as maximum and average latency. Our approach is to optimize such measures through the design of the code degree distribution.

Our paper is organized as follows: Section II introduces the system model for the heterogeneous multicasting network. Section III outlines the guidelines for our optimization problems in the asymptotic regime. Section IV proposes several performance measures and states the corresponding optimization problems. Section V presents the optimization results of the problems formulated in Section IV. Section VI contains finite-length simulation results.

II. SYSTEM MODEL: MULTICAST OVER BEC CHANNELS
We consider a streaming network consisting of a single server (source node) and n users (sink nodes), each directly connected to the server by a BEC, as shown in Figure 1. The source holds k information symbols and broadcasts a rateless coded stream to all n sinks. The rateless encoder is an LT encoder [1] with a degree distribution whose moment generating function is

P^(k)(x) = p_1^(k) x + p_2^(k) x^2 + ... + p_k^(k) x^k.   (1)

The LT encoder generates a potentially infinite number of output symbols and broadcasts the output stream along all BEC links.

Fig. 1. Broadcast/Multicast System Model. The source holds k information symbols; example sinks include an HDTV (high resolution, abundant power supply), a cellphone (low resolution, low power), and a handheld media player (medium resolution, low power), each characterized by a tuple (z_i, c_i, ǫ_i).

There are two types of sink nodes, which differ in the way the LT code is decoded. One type of sink uses the belief propagation (BP) algorithm [1] to recover the input symbols from the received output symbols, while the other type only accepts, and recovers information from, degree-1 output symbols received from the source. The first type are referred to as decoding, and the second as non-decoding sinks. When multiple description [5] encoded, the information symbols allow for tiered reconstruction qualities of the original source information at the sinks.

Sinks are sorted into 1 <= l <= n clusters, cluster i comprising n_i (i = 1, 2, ..., l) sinks, so that n = n_1 + n_2 + ... + n_l. A sink in cluster i is characterized by a tuple (z_i, c_i, ǫ_i) (i = 1, 2, ..., l). Here z_i is a real constant in [0, 1] indicating the fraction of input symbols that sinks in cluster i expect to recover; z_i could be related to the target distortion at the sinks. The two types of sink nodes are distinguished by the indicator c_i = 1{cluster i is decoding}. Finally, ǫ_i is the erasure rate of the BEC channels that link the source node to the sink nodes in cluster i.
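As a small illustration of LT encoding (a sketch of our own; the helper name lt_encode, the symbol sizes, and the toy degree distribution are illustrative, not from the paper), each output symbol is the XOR of a uniformly chosen subset of input symbols whose size is drawn from the degree distribution:

```python
import random

# Sketch of an LT encoder. Each output symbol XORs a uniformly chosen
# subset of input symbols whose size is drawn from the degree
# distribution P. A degree-1 output symbol is simply a copy of one
# input symbol, which is all a non-decoding sink can use.
def lt_encode(symbols, degree_probs, rng):
    """Return one output symbol as (set of source indices, XOR value)."""
    degrees = sorted(degree_probs)
    weights = [degree_probs[d] for d in degrees]
    d = rng.choices(degrees, weights=weights, k=1)[0]
    idx = rng.sample(range(len(symbols)), d)
    value = 0
    for i in idx:
        value ^= symbols[i]  # XOR of the chosen input symbols
    return set(idx), value

rng = random.Random(1)
data = [rng.randrange(256) for _ in range(8)]  # k = 8 one-byte symbols
P = {1: 0.2, 2: 0.5, 3: 0.3}                   # toy degree distribution
stream = [lt_encode(data, P, rng) for _ in range(20)]
```

In a real deployment the neighbor set would be communicated implicitly via a shared pseudorandom seed rather than shipped with each symbol.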
Depending on the performance measure, sinks in the same cluster can often be treated as one single sink, because the tuples fully characterize their decoding behavior in this broadcasting scenario.

III. THE OPTIMIZATION PROBLEM IN THE ASYMPTOTIC REGIME
The decoding process of LT codes starts by simply recovering the input symbols connected to the received output symbols of degree 1. This initial recovery induces a new set of output symbols of degree 1. The decoding can continue in the same manner as long as there are output symbols of induced degree 1. Such symbols constitute what is known as the ripple. The decoding process halts when the ripple becomes empty. In [2], [6] and [7], the expected size of the ripple throughout the decoding process is given as a function of the number of unrecovered information symbols. We restate here the part of Theorem 2 in [7] that concerns the expected size of the ripple.

Assume w·k output symbols have been collected and can be used for decoding of an LT code, for some positive constant w. Let u·k be the number of unrecovered information symbols, for a constant u in [0, 1]. Let r^(k)(u) be the expected size of the ripple, normalized by k.

Theorem 1 (Maatouk and Shokrollahi [7, Thm. 2]): If an LT code of k information symbols has a degree distribution specified by the moment generating function P^(k)(x) (see (1)), then

r^(k)(u) = wu ( P^(k)'(1-u) + (1/w) ln u ) + O(1/k),   (2)

where P^(k)'(x) stands for the first derivative of P^(k)(x) with respect to x.

The original theorem in [7] is stated for the case where the number of output symbols collected by the receiver is larger than the total number of information symbols, i.e., w > 1. However, the proof suggests that the theorem also holds for any constant w < 1. Assume that P^(k)(x) converges to P(x) = sum_{i>=1} p_i x^i as k -> infinity; then we have

r(u) = lim_{k->infinity} r^(k)(u) = u ( wP'(1-u) + ln u ).   (3)

In order for the decoding process to carry on until at least a fraction z of the information symbols can be recovered, the ripple size has to be kept positive.
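As a numeric illustration of the asymptotic ripple expression (3) (our own toy example; the degree distribution and overhead below are not from the paper), one can evaluate r(u) on a grid and find the largest fraction z for which the ripple stays positive at a given overhead w:

```python
import numpy as np

# Toy degree distribution P(x) = 0.1x + 0.5x^2 + 0.4x^3 and an
# illustrative overhead w; evaluate the asymptotic ripple (3).
p = {1: 0.1, 2: 0.5, 3: 0.4}

def P_prime(x):
    """P'(x) = sum_d d * p_d * x^(d-1)."""
    return sum(d * pd * x ** (d - 1) for d, pd in p.items())

def ripple(u, w):
    """Normalized expected ripple r(u) = u (w P'(1-u) + ln u)."""
    return u * (w * P_prime(1.0 - u) + np.log(u))

# The recovery condition asks r(u) > 0 on (1-z, 1]; scanning u downward
# from 1, decoding stalls at the first u where the ripple dies, which
# bounds the largest recoverable fraction z_max = 1 - u.
w = 1.1
us = np.linspace(1e-4, 1.0, 10000)
positive = ripple(us, w) > 0
first_fail = us[np.where(~positive)[0].max()] if not positive.all() else 0.0
z_max = 1.0 - first_fail
```

For this particular distribution the ripple dies for small u (ln u dominates), so full recovery is out of reach at w = 1.1 but a large constant fraction is not.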
If we use the expected value as a rough estimate of the ripple size, we should have

r(u) = u ( wP'(1-u) + ln u ) > 0, for all u in (1-z, 1],

or equivalently,

wP'(1-u) + ln u > 0, for all u in (1-z, 1].   (4)

Inequality (4) provides a guideline for the design of the degree distribution P(x).

It is interesting to consider the implications of inequality (4) for the relationship between w and z when the degree distribution is p_1 = 1, that is, when all output symbols are of degree 1. Then (4) should tell us how many output symbols of degree 1 we need (on average) in order to recover a fraction z of the information symbols. Note that when p_1 = 1 we have P(x) = x and P'(x) = 1, and in turn from (4), w + ln u > 0 for all u in (1-z, 1]. Thus w >= -ln(1-z), and consequently the optimal value of w is -ln(1-z).

Note that we would get the same result if we tried to answer the question about w and z using the coupon collector's problem, also known as the urns-and-balls problem. Throw a number of balls into k urns, each ball thrown independently and falling into any urn with equal probability. What is the number of balls N needed for the number of urns containing at least one ball to reach s? Note that N is a random variable. It has been derived in [8, Ch. 2] (see also [9]) that the expected value of N is

E[N] = k ( 1/k + 1/(k-1) + ... + 1/(k-s+1) ) ≈ k ln( k/(k-s+1) ) = -k ln( 1 - (s-1)/k ).

Set z = s/k, the fraction of urns containing at least one ball. Then, as k -> infinity, E[N] -> -k ln(1-z).

Now assume that the number of collected output symbols of the LT code specified in Theorem 1 is W·k, where W is a random variable with mean ω, and denote the normalized expected ripple size as k -> infinity by r_W(u). Then:

Corollary 2: r_W(u) = u ( ωP'(1-u) + ln u ).   (5)

Proof:
This follows from the linearity of the expected ripple size in W for given u and P:

r_W(u) = E[ Wu ( P'(1-u) + (1/W) ln u ) ] = u ( E[W] P'(1-u) + ln u ) = u ( ωP'(1-u) + ln u ).

Then, from (3), we have the recovery condition for random W with mean ω:

ωP'(1-u) + ln u > 0, for all u in (1-z, 1].   (6)

In the next section, we shall use (4) to formulate our optimization problems for LT code degree distribution design.

IV. PERFORMANCE MEASURES AND THEIR OPTIMIZATION PROBLEM STATEMENTS
Recall from Section II that tuples (z_i, c_i, ǫ_i), i = 1, 2, ..., l, are used to characterize the l sink clusters in the streaming network. Let t_i·k be the number of output symbols transmitted by the source up to the time when the sinks in cluster i are able to recover their targeted fraction z_i of the input symbols. Then the normalized number of symbols a sink in cluster i receives has mean t_i(1-ǫ_i).

If cluster i is decoding (c_i = 1), then let x = 1-u in (6); we have

(1-ǫ_i) t_i P'(x) + ln(1-x) > 0, for all x in [0, z_i).   (7)

A non-decoding user recovering information from a rateless coded stream with degree distribution specified by P(x) is equivalent to a decoding user recovering information from a coded stream with degree distribution specified by P̃(x) = (1 - P'(0)) + P'(0)x. If cluster i is non-decoding (c_i = 0), then let p_1 = P'(0), the fraction of degree-1 symbols, and we have

(1-ǫ_i) t_i p_1 + ln(1-x) > 0, for all x in [0, z_i).   (8)

The monotonicity and continuity of the ln function simplify (8) to

(1-ǫ_i) t_i p_1 + ln(1-z_i) >= 0.   (9)

a) Min-Max Latency: In the interest of the transmitting source, we wish to minimize the transmission time that guarantees the recovery of the targeted (z_1, z_2, ..., z_l) fractions of input symbols by the l sink clusters. In addition, when broadcasting time-sensitive streaming data, new data await transmission after the transmission of an older block of data has finished. Minimizing the maximum latency is especially important for keeping the entire communication scheme in pace. This optimization problem can be expressed as follows:

min_P max_i t_i   (10)
s.t. t_i (1-ǫ_i) P'(x) + ln(1-x) > 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     t_i (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l;

or equivalently,

min_{P,t} t   (11)
s.t. t (1-ǫ_i) P'(x) + ln(1-x) > 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     t (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l.

Let t*(z_1, z_2, ..., z_l) be the optimal solution to Problem (11). Then the achievable information recovery region for transmission of t·k output symbols is given by

Z(t) = { (z_1, z_2, ..., z_l) : t*(z_1, z_2, ..., z_l) <= t, z_i in [0, 1], i = 1, 2, ..., l }.

As we will see in the next section, optimization results show that when there are two decoding clusters in the network, one with perfect link conditions and the other with erasure rate ǫ = 0.5, the cluster with the worse channels can recover a substantial fraction of the input symbols by the time the cluster with perfect channels recovers its target fraction. If the source uses the ideal soliton or robust soliton distributions, however, the cluster with the worse channels may hardly recover anything until about k output symbols have been transmitted. Similar results can be seen for cases where one decoding cluster and one non-decoding cluster are present in the network.

b) Max-Min Channel Utilization: The Shannon capacity of the BEC link to sink cluster i is (1-ǫ_i) bits per channel use. The channel utilization of a link to cluster i is then

v_i = z_i / ( (1-ǫ_i) t_i ).

We wish to maximize the minimum channel utilization over all links, which is equivalent to minimizing the inverse of the channel utilization:

min_P max_i t_i (1-ǫ_i) / z_i   (12)
s.t. t_i (1-ǫ_i) P'(x) + ln(1-x) >= 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     t_i (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l;

or equivalently, writing v for the common inverse utilization,

min_{P,v} v   (13)
s.t. v z_i P'(x) + ln(1-x) >= 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     v z_i p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l.

Maximizing the minimum channel utilization proves to be independent of the channel conditions, as may be inferred from the expression of Problem (13).
As we will see in the next section, high minimum channel utilization can be achieved when the decoding cluster has either a very low or a very high demand. An increase in the demand of the non-decoding cluster, on the other hand, always degrades channel utilization.

c) Max-Min Throughput:
The throughput at each sink cluster i may be defined as z_i/t_i. It is of interest to measure the objective channel degradation regardless of channel capacity, so as to provide a reference for service pricing of the broadcast application. We wish to maximize the minimum throughput over all sink clusters. This is equivalent to minimizing the maximum of the inverse throughput. The optimization problem is therefore expressed as Problem (14):

min_P max_i t_i / z_i   (14)
s.t. t_i (1-ǫ_i) P'(x) + ln(1-x) >= 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     t_i (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l;

or equivalently,

min_{P,w} w   (15)
s.t. w z_i (1-ǫ_i) P'(x) + ln(1-x) >= 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     w z_i (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l.

d) Minimum Average Latency: We are also interested in minimizing the average latency over all sinks. This is a natural measure of overall performance:

min_P (1/n) sum_{i=1}^{l} n_i t_i   (16)
s.t. t_i (1-ǫ_i) P'(x) + ln(1-x) >= 0, 0 <= x < z_i, if cluster i is decoding, i = 1, ..., l;
     t_i (1-ǫ_i) p_1 + ln(1-z_i) >= 0, if cluster i is non-decoding, i = 1, ..., l.

Optimization results show that when all channels are perfect and half of the sinks are decoding, half non-decoding, the optimized achievable average latency with one single broadcast data stream is mostly worse than broadcasting individually optimized data streams to different sinks on separate channels. Details are presented in Section V.

Since our objectives are minimizations of increasing functions of the latencies, by arguments similar to Lemma 2 of [4], we can claim that there must exist optimal solutions to Problems (11), (13), (15) and (16) with polynomials P(x) of degree no higher than d_max = ⌈1/(1 - max_i{z_i})⌉ - 1.
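Because the substitution s_j = t·p_j makes both the objective and the recovery constraints of Problem (11) linear in s_j once x is sampled at discrete points, the min-max latency problem can be solved directly as a linear program. A sketch under our own choice of parameters (two decoding clusters; SciPy assumed; not the instances studied in the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-cluster instance (both decoding); the erasure rates
# and recovery targets below are illustrative.
clusters = [
    {"z": 0.9, "c": 1, "eps": 0.0},
    {"z": 0.6, "c": 1, "eps": 0.5},
]
# degree bound d_max = ceil(1/(1 - max_i z_i)) - 1
d_max = max(int(np.ceil(1.0 / (1.0 - max(cl["z"] for cl in clusters)))) - 1, 1)

# With s_j = t * p_j, minimize t = sum_j s_j subject to the sampled
# recovery constraints; dividing the optimal s by t recovers p.
A_ub, b_ub = [], []
for cl in clusters:
    if cl["c"] == 1:
        # decoding: t(1-eps)P'(x) + ln(1-x) >= 0 for x in [0, z)
        for x in np.linspace(0.0, cl["z"], 200, endpoint=False):
            A_ub.append([-(1 - cl["eps"]) * j * x ** (j - 1)
                         for j in range(1, d_max + 1)])
            b_ub.append(np.log(1.0 - x))
    else:
        # non-decoding: t(1-eps)p_1 + ln(1-z) >= 0
        row = [0.0] * d_max
        row[0] = -(1 - cl["eps"])
        A_ub.append(row)
        b_ub.append(np.log(1.0 - cl["z"]))

res = linprog(np.ones(d_max), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * d_max)
t_star = res.fun      # minimized common latency t
p = res.x / t_star    # recovered degree distribution p_1, ..., p_dmax
```

Since only finitely many x are sampled, the result is a lower bound on the latency of the continuously constrained problem, matching how the numerical results here are obtained.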
This promises the ready conversion of Problems (11), (13) and (15) into linear programming problems by the method proposed in [4]. Problem (16) may be converted to a series of linear programming problems for fixed p_1 in [0, 1] when there are only two sink clusters in the network, one decoding and the other non-decoding. To solve the linear programming problems numerically, the parameter x in the constraints is evaluated at discrete points, yielding lower bounds for the minimization problems with constraints continuous in x. In the next section we show the optimization results of these problems in detail.

V. OPTIMIZATION RESULTS
A. Application to 2-Cluster Situations
Now we apply our optimization problems to the case where only two sink clusters with distinct tuple characteristics, (z_1, c_1, ǫ_1) and (z_2, c_2, ǫ_2), exist. We deal with: (1) c_1 = c_2 = 1, ǫ_1 = 0, ǫ_2 = 0.5: both clusters are decoding, but with diverse channel conditions; (2) one cluster is decoding while the other is not, with equal or diverse channel qualities.

Figure 2 shows the contour graphs of the outer bounds of the min-max latency on the z_1-z_2 plane for four typical cases.

• Dense contour regions indicate the regions where the minimized maximum latency is sensitive in z_1 or z_2;
• Vertical (or horizontal) contour sections indicate regions where z_1 (or z_2) is the bottleneck of latency;
• Steep (or gradual) contour sections indicate z_1 (or z_2)-dominant regions: reducing z_1 (or z_2) a bit trades for a bigger advance in z_2 (or z_1) for fixed min-max latency. These are the regions where the degree distribution of the LT encoder can be finely tuned for the two clusters to finish reception at the same time.

Fig. 2. Contour graphs of the numerical lower bounds of the min-max latency: (a) ǫ_1 = 0, ǫ_2 = 0.5, both clusters decoding; (b) ǫ_1 = 0, ǫ_2 = 0, cluster 1 decoding, cluster 2 non-decoding; (c) ǫ_1 = 0.5, ǫ_2 = 0, cluster 1 decoding, cluster 2 non-decoding. Contours define the outer bounds of achievable (z_1, z_2) regions given a specific number of transmitted output symbols. Drawn from the solution to Problem (11).

Figures 3(a) and 3(b) show respectively the contour graphs of the outer bounds of the max-min channel utilization when both sink clusters are decoding and when one cluster is decoding but the other is not.

Fig. 3. Contour graphs of the achievable max-min channel utilization: (a) both clusters decoding; (b) cluster 1 decoding, cluster 2 non-decoding. Drawn from the solution to Problem (12).

• The results are independent of the channel quality;
• Both clusters decoding (Figure 3(a)):
  - For uniform demand z_1 = z_2 = z, the channel utilization is the same as the slope of the outer-bound curve in the z-r plot in [4]: lowest for intermediate values of z and highest when z is near 0 or 1;
  - For non-uniform demand, however diverse, the max-min channel utilization is strictly better;
• Cluster 1 decoding while cluster 2 is not (Figure 3(b)):
  - The max-min channel utilization decreases with increasing z_2;
  - The "lowest in the middle" phenomenon can still be observed when z_2 is small;
  - The minimum channel utilization can drop considerably lower.

For the results of maximizing the minimum throughput, we show the outer bounds of the optimal solutions for z_1 = z_2 under different channel and decoding conditions in Figure 4.

Fig. 4. Max-min throughput versus z_1 = z_2 = z under various channel conditions. Curves are shown for cluster 1 decoding and cluster 2 non-decoding with (ǫ_1, ǫ_2) in {(0, 0.2), (0, 0.4), (0.2, 0), (0.4, 0), (0, 0)}, and for both clusters decoding with (ǫ_1, ǫ_2) in {(0, 0), (0, 0.2), (0.4, 0)}. Drawn from the solution to Problem (14).

As shown in Figure 4:

• The max-min throughput cannot exceed the capacity of the worse channel, as expected;
• The curves for both clusters decoding under different channel conditions are almost parallel and similar in trend to the channel utilization curves, which is also expected because of the uniform demand assumed here;
• The curves for cluster 1 decoding and cluster 2 non-decoding always drop as z grows; when the demand is not uniform, however, with z_2 small enough and z_1 large enough, an increase in throughput can still be observed;
• The distance between the outer-bound max-min throughput curves for one cluster decoding and the other not becomes smaller as z_1 = z_2 = z grows larger, which implies that the optimized minimum throughput is less sensitive to channel conditions when z is larger.

Figure 5 shows the solution to Problem (16), minimizing the average latency.
• As shown in Figure 5(a), on a perfect channel, even when half of the output symbols are of degree 1, a decoding sink may be able to decode 99% of all the information symbols with an overhead of less than 16% of the size of the set of information symbols;
• As shown in Figure 5(c), as the proportion of decoding sinks increases from 0 to 1, the fraction of degree-1 output symbols in the optimized degree distribution gracefully decreases from 1 to 0.
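The reduction of Problem (16) to a series of linear programs for fixed p_1 can be sketched as follows (our own construction, under illustrative parameters; SciPy assumed). For a two-cluster network with a decoding and a non-decoding cluster, fixing p_1 makes the non-decoding latency a closed form and leaves a linear program for the decoding latency:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: perfect channels, cluster 1 decoding and
# cluster 2 non-decoding, z1 = z2 = 0.9, equal cluster sizes.
z1, z2, n1_frac = 0.9, 0.9, 0.5
d_max = 9  # ceil(1/(1 - 0.9)) - 1

def min_t1_given_p1(p1):
    """Smallest latency t1 of the decoding cluster when the degree-1
    fraction is fixed at p1 (LP in the variables s_j = t1 * p_j)."""
    A_ub, b_ub = [], []
    for x in np.linspace(0.0, z1, 200, endpoint=False):
        A_ub.append([-j * x ** (j - 1) for j in range(1, d_max + 1)])
        b_ub.append(np.log(1.0 - x))
    # fixing p1 stays linear: s_1 = p1 * sum_j s_j
    A_eq = [[1.0 - p1] + [-p1] * (d_max - 1)]
    res = linprog(np.ones(d_max), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[0.0], bounds=[(0, None)] * d_max)
    return res.fun if res.status == 0 else np.inf

# The non-decoding cluster needs t2 = -ln(1 - z2)/p1; sweep p1 and keep
# the best average latency n1_frac*t1 + (1 - n1_frac)*t2.
avg_latency, p1_opt = min(
    (n1_frac * min_t1_given_p1(p1)
     + (1 - n1_frac) * (-np.log(1 - z2) / p1), p1)
    for p1 in np.linspace(0.05, 0.95, 19)
)
```

The sweep exposes the tension the contour plots illustrate: a large p_1 serves the non-decoding sinks quickly but wastes degrees that the decoding sinks could exploit.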
B. Comparison of Performance
Table I compares the total number of transmitted symbols required to fulfill the demands of two clusters under four streaming schemes:

• Scheme A0: the source sends a single stream to all sinks, minimizing the maximum latency.
• Scheme A1: the source sends a single stream to all sinks, minimizing the latency of cluster 1.
• Scheme A2: the source sends a single stream to all sinks, minimizing the latency of cluster 2.
• Scheme A12: the source sends two independent streams to the clusters, each minimizing the latency of the targeted cluster.
TABLE I
COMPARISON OF TOTAL NUMBER OF TRANSMITTED SYMBOLS UNDER FOUR STREAMING SCHEMES

(z_i, c_i, ǫ_i)             Scheme A0    Scheme A1
cluster 1 (0.98, 1, 0)      1.5634       ∞
cluster 2 (0.72, 0, 0)      1.5634       ∞
cluster 1 (0.98, 1, 0)      1.6220
cluster 2 (0.63, 1, 0.5)    1.6220       1.9828

(z_i, c_i, ǫ_i)             Scheme A2    Scheme A12
cluster 1 (0.98, 1, 0)      3.9120
cluster 2 (0.72, 0, 0)      1.2730       1.2730
cluster 1 (0.98, 1, 0)      1.9959
cluster 2 (0.63, 1, 0.5)    1.5782       1.5782
Scheme A0 performs significantly better than Schemes A1, A2 and A12 in terms of the total number of output symbols transmitted by the source. When considering average latency for multicasting to both decoding and non-decoding clusters over perfect channels, however, it can be seen from Figure 5(b) that transmitting separately encoded streams on separate channels (Scheme A12) is better than transmitting a single stream (Scheme A0).

Fig. 5. Results for minimizing average latency, Problem (16). (a) Minimum achievable latency of the decoding cluster vs. p_1 over a perfect channel, for z = 0.9 and z = 0.99, decoding and non-decoding. (b) Minimum average latency and the achieving p_1 versus the size n_1/n of the decoding cluster, for z_1 = z_2 = 0.9 and z_1 = z_2 = 0.99, single stream and separate streams; ǫ_1 = ǫ_2 = 0, cluster 1 decoding, cluster 2 non-decoding. (c) Minimum average latency contour graph for half decoding, half non-decoding sinks, ǫ_1 = ǫ_2 = 0.

VI. FINITE-LENGTH SIMULATION
Figure 6(a) gives simulated sample curves of information recovery versus latency when the decoding cluster targets recovery of 80% of the input symbols and the non-decoding cluster targets 40%. The distribution of the latency until the two clusters achieve their targeted information recovery is given in Figure 6(b). The empirical average value of t is slightly greater than the optimization result t*, which is within an acceptable error range.

Fig. 6. (a) Finite-length simulated time progress of information recovery for the degree distribution optimized for min-max latency with z_1 = 0.8, z_2 = 0.4, ǫ_1 = ǫ_2 = 0, and k = 800 information symbols; 5 simulation instances plotted. (b) Empirical probability distribution of the latencies t_1 and t_2, obtained from 100 samples; the mean of t = max{t_1, t_2} is 1.0718, with standard deviation 0.0300.

VII. CONCLUDING REMARKS
In this work, we have investigated the performance of LT rateless codes for streaming from a single server to diverse users. The degree distributions of the LT output symbols have been optimized according to network parameters. The degree distribution optimization problems have been formulated in the asymptotic regime and solved numerically, and simulations have been conducted to confirm the usability of the asymptotic results as a guideline for finite-length code design. The impact of diversity in channel conditions, non-uniform demands, and coding methods of users on transmission latency, channel utilization and throughput has also been shown through the optimization results. As demonstrated in Section V, following our scheme, the total bandwidth consumption for satisfying diverse users is considerably reduced compared to either sending separate streams for different users or sending a stream that is optimized for only one of the users.

REFERENCES

[1] M. Luby. LT Codes. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 271-280, November 2002.
[2] A. Shokrollahi. Raptor Codes. IEEE Transactions on Information Theory, 52(6):2551-2567, 2006.
[3] 3rd Generation Partnership Project (3GPP). Technical Specification Group Services and System Aspects; Multimedia Broadcast/Multicast Services (MBMS); Protocols and Codecs (Release 6), 2005.
[4] S. Sanghavi. Intermediate Performance of Rateless Codes. In Information Theory Workshop (ITW) 2007.
[5] V. K. Goyal. Multiple Description Coding: Compression Meets the Network. IEEE Signal Processing Magazine, 18(5):74-94, 2001.
[6] R. Karp, M. Luby, and A. Shokrollahi. Finite Length Analysis of LT Codes. In International Symposium on Information Theory (ISIT) 2004, page 39, June 2004.
[7] G. Maatouk and A. Shokrollahi. Analysis of the Second Moment of the LT Decoder. ArXiv e-prints, February 2009.
[8] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1, page 225. John Wiley & Sons, third edition, 1970.
[9] C. Fragouli and E. Soljanin.