A Multi-hop Multi-source Algebraic Watchdog
MinJi Kim∗, Muriel Médard∗, João Barros†
∗Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Email: {minjikim, medard}@mit.edu
†Instituto de Telecomunicações, Departamento de Engenharia Electrotécnica e de Computadores, Faculdade de Engenharia da Universidade do Porto, Portugal
Email: [email protected]

Abstract—In our previous work ("An Algebraic Watchdog for Wireless Network Coding"), we proposed a new scheme in which nodes can detect malicious behaviors probabilistically and police their downstream neighbors locally using overheard messages, thus providing a secure global self-checking network. As the first building block of such a system, we focused on a two-hop network, and presented a graphical model to understand the inference process by which nodes police their downstream neighbors and to compute the probabilities of misdetection and false detection. In this paper, we extend the Algebraic Watchdog to a more general network setting, and propose a protocol with which we can establish trust in coded systems in a distributed manner. We develop a graphical model to detect the presence of an adversarial node downstream within a general two-hop network. The structure of the graphical model (a trellis) lends itself to well-known algorithms, such as the Viterbi algorithm, that can compute the probabilities of misdetection and false detection. Using this as a building block, we generalize our scheme to multi-hop networks. We show analytically that as long as the min-cut is not dominated by the Byzantine adversaries, upstream nodes can monitor downstream neighbors and allow reliable communication with certain probability. Finally, we present preliminary simulation results that support our analysis.
I. INTRODUCTION
We consider the problem of Byzantine detection in a coded wireless network. Previous work on Byzantine detection focused on receiver-based protocols, in which the destination nodes of the corrupted data detect the presence of an adversary upstream. However, this detection may come too late, as the adversary is partially successful in disrupting the network even if it is detected: it has wasted network bandwidth, while the source is still unaware of the need for retransmission.

In our previous work [1], we proposed a new scheme called the algebraic watchdog, in which nodes can detect malicious behaviors probabilistically by taking advantage of the broadcast nature of the wireless medium. The algebraic watchdog was inspired by an analogous protocol for routing wireless networks, called the watchdog and pathrater [2]. The key difference between our previous work [1] and that of [2] is that we allow network coding. Network coding [3][4] is advantageous as it increases throughput, is robust against failures/erasures, and is resilient in dynamic networks.

The key challenge in the algebraic watchdog is that, by incorporating network coding, we can no longer recognize packets individually. In [2], a node v can monitor its downstream neighbor v′ by checking that the packet transmitted by v′ is a copy of what v transmitted to v′. However, with network coding, this is no longer possible, as transmitted packets are a function of the received packets. Furthermore, v may not have full information regarding the packets received at v′; thus, node v is faced with the challenge of inferring the packets received at v′ and ensuring that v′ is transmitting a valid function of the received packets. We note that [5] combines source coding with the watchdog; thus, [5] does not face the same problem as the algebraic watchdog.

II. PROBLEM STATEMENT
We use elements from a field and their bit-representations. We use the same character in italic font (i.e. x) for the field element, in bold font (i.e. x) for the bit-representation, and in underscored bold font (i.e. x) for vectors. For arithmetic operations in the field, we use the conventional notation (i.e. +, −, ·). For bit-wise addition, we use ⊕.

The problem statement for this paper is similar to that in our previous work [1]. A wireless network is modeled using a directed graph G = (V, E, E̅), where V is the set of network nodes, E the set of intended transmissions, and E̅ the set of interference channels. If (v_i, v_j) ∈ E and (v_i, v_k) ∈ E̅, where v_i, v_j, v_k ∈ V, then there is an intended transmission from v_i to v_j, and v_k can overhear this transmission with noise (modeled using a binary symmetric channel BSC(p_ik)). Node v_i ∈ V transmits a coded packet p_i, where p_i = [a_i, h_{I_i}, h_{x_i}, x_i] is a {0,1}-vector. A valid packet p_i is defined as below:
• a_i corresponds to the coding coefficients α_j, j ∈ I_i, where I_i ⊆ V is the set of nodes adjacent to v_i in E,
• h_{I_i} corresponds to the hashes h(x_j), v_j ∈ I_i, where h(·) is a δ-bit polynomial hash function,
• h_{x_i} corresponds to the polynomial hash h(x_i),
• x_i is the n-bit representation of x_i = Σ_{j∈I_i} α_j x_j.

The payload x_i is coded with an (n, k_i)-code C_i with minimum distance d_i. Code C_i is an error-correcting code of rate R_i = k_i/n = 1 − d_i/n, and is tailored for the forward communication. For instance, v_1 uses code C_1, chosen appropriately for the channel (v_1, v_j) ∈ E, to transmit the payload x_1.

We assume that the payload x_i is n bits and the hash h(·) is δ bits. We assume that the hash function h(·) is known to all nodes, including the adversary.
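As a concrete illustration of the packet format, the sketch below builds a valid packet p_i = [a_i, h_{I_i}, h_{x_i}, x_i]. For simplicity it assumes the field GF(2), so the coding coefficients α_j are bits and field addition is XOR, and it uses the affine hash h(x) = ax + b mod 2^δ that the simulations in Section VI adopt; both choices, and all parameter values, are illustrative rather than part of the problem statement.

```python
# Sketch of the packet format p_i = [a_i, h_Ii, h_xi, x_i] from Section II.
# Assumptions (not mandated by the paper): the field is GF(2), so coding
# coefficients are bits and field addition is XOR; the hash is the affine
# hash h(x) = ax + b mod 2^delta used in the Section VI simulations.

N_BITS = 10          # payload length n
DELTA = 2            # hash length delta
A, B = 3, 1          # hash parameters, fixed and known to all nodes

def poly_hash(x: int) -> int:
    """delta-bit affine hash h(x) = ax + b mod 2^delta."""
    return (A * x + B) % (1 << DELTA)

def make_packet(coeffs, upstream_payloads):
    """Build a valid packet [a_i, h_Ii, h_xi, x_i] over GF(2)."""
    assert len(coeffs) == len(upstream_payloads)
    x_i = 0
    for alpha, x_j in zip(coeffs, upstream_payloads):
        if alpha:                      # GF(2): alpha_j is 0 or 1
            x_i ^= x_j                 # field addition is XOR
    return {
        "a": list(coeffs),                                 # coding coefficients
        "h_I": [poly_hash(x) for x in upstream_payloads],  # hashes of inputs
        "h_x": poly_hash(x_i),                             # hash of own payload
        "x": x_i,                                          # n-bit payload
    }

pkt = make_packet([1, 1], [0b1010011010, 0b0110010001])
```

Note that, because h_{I_i} and h_{x_i} travel in the header, a downstream observer learns the hashes exactly even though it receives the payload through a noisy channel.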
In addition, we assume that a_i, h_{I_i}, and h_{x_i} are part of the header information, and are sufficiently coded to allow the nodes to correctly receive them even under noisy channel conditions. Note that the hashes h_{I_i} and h_{x_i} are contained within one hop; the overhead associated with the hashes is proportional to the in-degree of a node, and does not accumulate with the routing path length.

Fig. 1: A small neighborhood of a wireless network with v_1.

Assume that v_i transmits p_i = [a_i, h_{I_i}, h_{x_i}, x̂_i], where x̂_i = x_i ⊕ e, e ∈ {0,1}^n. If v_i is misbehaving, then e ≠ 0. Our goal is to probabilistically detect when e ≠ 0. Note that even if |e| is small (i.e. the Hamming distance between x̂_i and x_i is small), the algebraic interpretations of x̂_i and x_i may differ significantly.

A. Threat Model
We assume powerful adversaries, who can eavesdrop on their neighbors' transmissions, can inject or corrupt packets, and are computationally unbounded. However, the adversaries do not know the specific realization of the random errors introduced by the channels. The adversaries' objective is to corrupt the information flow without being detected by other nodes. Thus, an adversary will find an x̂_i that allows its misbehavior to go undetected, if any such x̂_i exists.

Our goal is to probabilistically detect malicious behavior that is beyond the channel noise, represented by BSC(p_ik). Note that the algebraic watchdog does not completely eliminate errors introduced by the adversaries; its objective is to limit the errors introduced by the adversaries to at most those of the channel. Channel errors (or errors introduced by adversaries below the channel noise level) can be corrected using appropriate error-correction schemes, which would be necessary even without Byzantine adversaries in the network.

The notion that adversarial errors should sometimes be treated as channel noise has been introduced previously in [6]: under heavy attack, attacks should be treated with special attention, while under light attack they can be treated as noise and corrected using error-correction schemes. The results in this paper partially reiterate this idea.

III. ALGEBRAIC WATCHDOG FOR TWO-HOP NETWORK
Consider a small neighborhood of nodes in G with nodes v_1, v_2, ..., v_m, v_{m+1}, v_{m+2}. Nodes v_i, i ∈ [1, m], want to transmit x_i to v_{m+2} via v_{m+1}. Without complete information about all the messages, we cannot verify with certainty whether v_{m+1} is misbehaving; however, there is a large overhead associated with acquiring complete information. Therefore, we take advantage of the wireless setting, in which nodes can overhear their neighbors' transmissions (as shown in Figure 1), to probabilistically detect malicious behavior. Each node checks whether its downstream neighbors are transmitting values that are consistent with the gathered information. If a node detects that its downstream neighbor is misbehaving, it can alert other nodes within the network.

The graphical model illustrates the inference process a node executes to check its next-hop node. Without loss of generality, we consider the problem from v_1's perspective. Denote by x̃_i the noisy message v_1 overhears from node v_i's transmission, for any i ∈ [2, m]. Since the header is protected, v_1 correctly decodes h(x_i) and α_i for all i ∈ [1, m].

A. Transition matrix
We define a transition matrix T_i to be a 2^(n(1−H(d_i/n))+δ) × 2^(n(1−H(d_i/n))) matrix, where H(·) is the binary entropy function:

T_i(x̃_i, y) = p_i(x̃_i, y)/N if h(y) = h(x_i), and 0 otherwise,
p_i(x̃_i, y) = p_i^Δ(x̃_i, y) · (1 − p_i)^(n − Δ(x̃_i, y)),
N = Σ_{y : h(y) = h(x_i)} p_i(x̃_i, y),

where Δ(x, y) gives the Hamming distance between codewords x and y. In other words, v_1 computes X̃_i = {x | h(x) = h(x_i)} as the list of candidates for x_i. For any overheard pair [x̃_i, h(x_i)], there are multiple candidates for x_i (i.e. |X̃_i| of them), although the probabilities associated with each inferred x_i are different. This is because of the uncertainties associated with the wireless medium, represented by BSC(p_i).

For each x ∈ X̃_i, p_i(x̃_i, x) gives the probability that x is the original codeword sent by node v_i given that v_1 overheard x̃_i under BSC(p_i). Since we only consider x ∈ X̃_i, we normalize the probabilities using N to get the transition probability T_i(x̃_i, x). Note that T_i(x̃_i, y) = 0 if h(y) ≠ h(x_i).

The structure of T_i heavily depends on the collisions of the hash function h(·) in use. Note that the structure of T_i is independent of i; therefore, a single transition matrix T can be precomputed for all i ∈ [1, m] given the hash function h(·). A graphical representation of T is shown in Figure 2a. For simplicity of notation, we represent T as a matrix; however, the transition probabilities can also be computed efficiently using hash collision lists.

B. Watchdog trellis
Node v_1 uses the gathered information to generate a trellis, which is used to infer the valid linear combination that v_{m+1} should transmit to v_{m+2}. As shown in Figure 2b, the trellis has m layers; each layer may contain up to 2^n states, each representing the inferred linear combination so far. For example, Layer i consists of all possible values of Σ_{j=1}^{i} α_j x_j. The matrices T_i, i ∈ [2, m], define the connectivity of the trellis. Let s′ and s be states in Layer i−1 and Layer i, respectively. Then, an edge (s′, s) exists if and only if there exists an x such that s′ + α_i x = s and T_i(x̃_i, x) ≠ 0. We denote by w_e(·,·) the edge weight, where w_e(s′, s) = T_i(x̃_i, x) if edge (s′, s) exists, and zero otherwise.

C. Viterbi-like algorithm
We denote by w(s, i) the weight of state s in Layer i. Node v_1 selects a start state in Layer 1 corresponding to α_1 x_1, as shown in Figure 2. The weight of the Layer 1 state is w(s, 1) = 1 if s = α_1 x_1, and zero otherwise.

Fig. 2: Graphical representation of the inference process at node v_1: (a) transition matrix T_i(x̃_i, x_i); (b) trellis for the Algebraic Watchdog; (c) inverse transition matrix T^{-1}(x_{m+1}, x̃_{m+1}). In the trellis, the transition probability from Layer i−1 to Layer i is given by T_i(x̃_i, x_i), which is shown in (a).

For the subsequent layers, multiple paths can lead to a given state, and the algorithm keeps the aggregate probability of reaching that state. More precisely, w(s, i) is:

w(s, i) = Σ_{s′ ∈ Layer i−1} w(s′, i−1) · w_e(s′, s).

By definition, w(s, i) is equal to the total probability of s = Σ_{j=1}^{i} α_j x_j given the overheard information. Therefore, w(s, m) gives the probability that s is the valid linear combination that v_{m+1} should transmit to v_{m+2}. It is important to note that w(s, m) depends on the channel statistics as well as the overheard information. For some states s, w(s, m) = 0, which indicates that state s cannot be a valid linear combination; only those states s with w(s, m) > 0 are the inferred candidate linear combinations.

Note that the algorithm introduced above is a dynamic program, similar to the Viterbi algorithm. Therefore, tools developed for dynamic programming and the Viterbi algorithm can be used to compute the probabilities efficiently.

D. Decision making
Node v_1 computes the probability that the overheard x̃_{m+1} and h(x_{m+1}) are consistent with the inferred w(·, m) to make a decision regarding v_{m+1}'s behavior. To do so, v_1 constructs an inverse transition matrix T^{-1}, which is a 2^(n(1−d_{m+1}/n)) × 2^(n(1−d_{m+1}/n)+δ) matrix whose elements are defined as follows:

T^{-1}(y, x̃_{m+1}) = p_{m+1}(x̃_{m+1}, y)/M if h(y) = h(x_{m+1}), and 0 otherwise,
M = Σ_{y : h(y) = h(x_{m+1})} p_{m+1}(x̃_{m+1}, y).

Unlike T introduced in Section III-A, T^{-1}(x, x̃_{m+1}) gives the probability of overhearing [x̃_{m+1}, h(x_{m+1})] given that x ∈ {y | h(y) = h(x_{m+1})} is the original codeword sent by v_{m+1}, and the channel statistics. Note that T^{-1} is identical to T except for the normalizing factor M. A graphical representation of T^{-1} is shown in Figure 2c.

In Figure 2c, s_1 and s_2 are the inferred candidate linear combinations, i.e. w(s_1, m) ≠ 0 and w(s_2, m) ≠ 0; the end node indicates what node v_1 has overheard from v_{m+1}. Note that although s_1 is one of the inferred linear combinations, s_1 is not connected to the end node; this is because h(s_1) ≠ h(x_{m+1}). On the other hand, h(s_3) = h(x_{m+1}); as a result, s_3 is connected to the end node although w(s_3, m) = 0. We define an inferred linear combination s as matched if w(s, m) > 0 and h(s) = h(x_{m+1}).

Node v_1 uses T^{-1} to compute the total probability p∗ of hearing [x̃_{m+1}, h(x_{m+1})] given the inferred linear combinations, by computing the following equation:

p∗ = Σ_s w(s, m) · T^{-1}(s, x̃_{m+1}).

Probability p∗ is the probability of overhearing x̃_{m+1} given the channel statistics; thus, it measures the likelihood that v_{m+1}'s transmission is consistent with the information gathered by v_1. Node v_1 can use p∗ to make a decision on v_{m+1}'s behavior.
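The whole inference pipeline of this section, transition rows, the Viterbi-like forward pass, and the statistic p∗, can be sketched end to end. The toy hash, the GF(2) field (so state transitions are XORs), and all numeric parameters below are illustrative assumptions, not the paper's actual choices.

```python
# End-to-end sketch of Sections III-A through III-D: build transition rows,
# run the Viterbi-like forward pass, then compute p*.  The field is GF(2)
# (state transitions are XORs), the hash is a toy 2-bit hash, and every
# parameter value is an illustrative assumption.

N_BITS = 4          # payload length n (tiny, for illustration)
P_S = 0.1           # BSC crossover probability on the overhearing channels

def hamming(x, y):
    return bin(x ^ y).count("1")

def bsc_likelihood(x_tilde, y, p):
    """p^Delta * (1-p)^(n-Delta): probability of overhearing x_tilde given y."""
    d = hamming(x_tilde, y)
    return p ** d * (1 - p) ** (N_BITS - d)

def toy_hash(x):
    return x % 4    # stand-in for the delta-bit polynomial hash h(.)

def transition_row(x_tilde, target_hash, p=P_S):
    """One row of T_i: normalized likelihoods of hash-consistent candidates."""
    cand = [y for y in range(1 << N_BITS) if toy_hash(y) == target_hash]
    lik = {y: bsc_likelihood(x_tilde, y, p) for y in cand}
    norm = sum(lik.values())               # the factor N of Section III-A
    return {y: v / norm for y, v in lik.items()}

def forward_pass(start_state, rows):
    """w(s, m): aggregate probability of each candidate linear combination."""
    w = {start_state: 1.0}                 # Layer 1: alpha_1 * x_1, known exactly
    for row in rows:                       # one trellis layer per overheard node
        nxt = {}
        for s, ws in w.items():
            for x, px in row.items():      # edge s -> s XOR x (GF(2) addition)
                nxt[s ^ x] = nxt.get(s ^ x, 0.0) + ws * px
        w = nxt
    return w

def p_star(w_final, x_tilde_out, target_hash, p=P_S):
    """p* = sum_s w(s, m) * T^{-1}(s, x_tilde_out), as in Section III-D."""
    matched = [y for y in range(1 << N_BITS) if toy_hash(y) == target_hash]
    m_norm = sum(bsc_likelihood(x_tilde_out, y, p) for y in matched)  # factor M
    return sum(ws * bsc_likelihood(x_tilde_out, s, p) / m_norm
               for s, ws in w_final.items() if s in matched)

# v_1 knows its own contribution exactly and overhears two peers and the relay.
x1 = 0b1100                                    # alpha_1 * x_1
rows = [transition_row(0b0101, toy_hash(0b0101)),
        transition_row(0b0011, toy_hash(0b0011))]
w_final = forward_pass(x1, rows)
score = p_star(w_final, x_tilde_out=0b1010, target_hash=toy_hash(0b1010))
```

In this toy run every trellis state happens to hash-match the overheard word, so p∗ aggregates a contribution from each candidate; a state whose hash differs from h(x_{m+1}) would simply contribute zero.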
For example, v_1 can use a threshold decision rule to decide whether v_{m+1} is misbehaving: v_1 claims that v_{m+1} is malicious if p∗ ≤ t, where t is a threshold value determined by the given channel statistics; otherwise, v_1 claims that v_{m+1} is well-behaving.

Depending on the decision policy used, we can use the hypothesis-testing framework to analyze the probabilities of false positives and false negatives. Reference [1] provides such an analysis for the simple two-hop network. However, the purpose of this paper is not to propose a decision policy, but to propose a method with which we can compute p∗, which can be used to establish trust within a network. We note that it would be worthwhile to look into specific decision policies and their performance (i.e. false positive/negative probabilities), as in [1].

IV. ANALYSIS FOR TWO-HOP NETWORK
In this section, we analyze the performance of the Algebraic Watchdog for a two-hop network.
Theorem 4.1:
Consider a two-hop network as shown in Figure 1, and consider node v_j, j ∈ [1, m]. Then, the expected number of matched codewords is:

2^( n[ Σ_{i≠j, i∈[1,m+1]} ( H(p_ij) − H(d_i/n) ) − 1 ] − mδ ).

Proof:
Without loss of generality, we consider node v_1. The proof builds on concepts and techniques developed for list decoding [7]. We first consider the overhearing of v_k's transmission, k ∈ [2, m]. Node v_1 overhears x̃_k from v_k. The noise introduced by the overhearing channel is characterized by BSC(p_k); thus, E[Δ(x_k, x̃_k)] = n·p_k. Now, we consider the number of codewords within B(x̃_k, n·p_k), the Hamming ball of radius n·p_k centered at x̃_k: |B(x̃_k, n·p_k)| = 2^(n(H(p_k) − H(d_k/n))). Note that v_1 overhears the hash h(x_k); thus, the number of codewords that v_1 considers is reduced to 2^(n(H(p_k) − H(d_k/n)) − δ). Using this information, v_1 computes the set of inferred linear combinations, i.e. the states s with w(s, m) > 0. Note that v_1 knows precisely the value of x_1. Therefore, the number of inferred linear combinations is upper bounded by:

Π_{k∈[2,m]} 2^( n( H(p_k) − H(d_k/n) ) − δ )   (1)
= 2^( n[ Σ_{k∈[2,m]} ( H(p_k) − H(d_k/n) ) ] − (m−1)δ ).   (2)

Note that, due to the finite-field operations, these inferred linear combinations are randomly distributed over the space {0,1}^n. Now, we consider the overheard information x̃_{m+1} from the downstream node v_{m+1}. By a similar analysis as above, we can derive that there are 2^( n( H(p_{m+1,1}) − H(d_{m+1}/n) ) − δ ) codewords in the Hamming ball B(x̃_{m+1}, n·p_{m+1,1}) with hash value h(x_{m+1}). Thus, the probability that a randomly chosen codeword in the space {0,1}^n is in B(x̃_{m+1}, n·p_{m+1,1}) ∩ {x | h(x) = h(x_{m+1})} is:

2^( n( H(p_{m+1,1}) − H(d_{m+1}/n) ) − δ ) / 2^n.   (3)

Then, the expected number of matched codewords is the product of Equations (2) and (3).

Note that if we assume that the hash is of length δ = εn, then the statement in Theorem 4.1 is equal to:
2^( n[ Σ_{i≠j, i∈[1,m+1]} H(p_ij) − ( Σ_{i≠j, i∈[1,m+1]} H(d_i/n) + 1 + mε ) ] ).   (4)

This highlights the tradeoff between the quality of the overhearing channels and the redundancy (introduced by the C_i's and the hash h). If enough redundancy is introduced, then C_i and h together form an error-correcting code for the overhearing channels, thus allowing exact decoding to a single matched codeword.

The analysis also shows how adversarial errors can be interpreted. Assume that v_{m+1} wants to inject errors at rate p_adv. Then node v_1, although it has an overhearing channel BSC(p_{m+1,1}), effectively experiences an error rate of p_adv + p_{m+1,1} − 2·p_adv·p_{m+1,1}. Note that this does not change the set of the inferred linear combinations, but it affects x̃_{m+1}. Thus, overall, adversarial errors affect the set of matched codewords and the distribution of p∗. As we shall see in Section VI, the difference in the distribution of p∗ between a well-behaving relay and an adversarial relay can be used to detect malicious behavior.

V. PROTOCOL FOR ALGEBRAIC WATCHDOG
In this section, we use the two-hop algebraic watchdog from Section III in a hop-by-hop manner to ensure a globally secure network. In Algorithm 1, we present a distributed algorithm for nodes to secure their local neighborhood. Each node v transmits/receives data as scheduled; however, node v randomly chooses to check its neighborhood, at which point node v listens to its neighbors' transmissions to perform the two-hop algebraic watchdog from Section III.

Corollary 5.1:
Consider v_{m+1} as shown in Figure 1. Assume that the downstream node v_{m+2} is well-behaving, and thus forces h_{x_{m+1}} = h(x_{m+1}). Let p_i be the packet received by v_{m+1} from parent node v_i ∈ P(v_{m+1}). Then, if there exists at least one well-behaving parent v_j ∈ P(v_{m+1}), v_{m+1} cannot inject errors beyond the overhearing channel noise (p_{m+1,j}) without being detected.

Section IV shows that the presence of adversarial errors (at a rate above the channel noise) can be detected by a change in the distribution of p∗. Note that Corollary 5.1 does not make any assumption on whether the packets p_i are valid or not. Instead, the claim states that v_{m+1} transmits a valid packet given the packets p_i it has received.

Corollary 5.2:
Node v can inject errors beyond the channel noise only if one of the following two conditions is satisfied:
1) all its parent nodes P(v) = {u | (u, v) ∈ E} are colluding Byzantine nodes;
2) all its downstream nodes, i.e. the receivers of its transmission p_i, are colluding Byzantine nodes.

Remark:
Note that, in Case 1) of Corollary 5.2, v is not accountable to any well-behaving node: node v can transmit any packet without the risk of being detected by a well-behaving parent node. However, the min-cut to v is then dominated by adversaries, and the information flow through v is completely compromised, regardless of whether v is malicious or not. In Case 2) of Corollary 5.2, v can generate any hash value, since its downstream nodes are colluding adversaries; thus, it is not obliged to transmit a consistent hash, which is necessary for v's parent nodes to monitor v's behavior. However, note that v is then not responsible for delivering any data to a well-behaving node: even if v were well-behaving, it could not reach any well-behaving node without going through a malicious node in the next hop. Thus, the information flow through v is again completely compromised.

Therefore, Corollary 5.2 shows that the algebraic watchdog can aid in ensuring correct delivery of data when the following assumption holds: every intermediate node v on the path between source and destination has at least one well-behaving parent and at least one well-behaving child, i.e. there exists at least one path of well-behaving nodes. This is not a trivial result, as we are considering not only a single-path network but a multi-hop, multi-path network.

foreach node v do
    According to the schedule, transmit and receive data;
    if v decides to check its neighborhood then
        Listen to neighbors' transmissions;
        foreach downstream neighbor v′ do
            Perform the two-hop algebraic watchdog on v′;
        end
    end
end
Algorithm 1: Distributed algebraic watchdog at v.

TABLE I: The average and variance of p∗ with varying p_adv. We set m = 3, n = 10, δ = 2, and p_s = p_{m+1,1} = 10%.
p_adv | p∗_adv | var_adv | p∗_relay | var_relay
0%    | 0.0262 | 0.0019  | 0.0262   | 0.0019
5%    | 0.0205 | 0.0019  | 0.0259   | 0.0022
10%   | 0.0096 |         |          |

TABLE II: The average and variance of p∗ with varying δ. We set m = 3, n = 10, p_s = p_{m+1,1} = 10%, and p_adv = 10%.

VI. SIMULATIONS
We present preliminary MATLAB simulation results that show the difference in the distribution of p∗ between a well-behaving and an adversarial relay. We consider the setup in Figure 1. We set all p_i, i ∈ [2, m], to be equal, and denote this probability by p_s = p_i for all i. We denote by p_adv the probability with which the adversary injects errors; thus, the effective error that v_1 observes from an adversarial relay is the combined effect of p_{m+1,1} and p_adv. The hash function h(x) = ax + b mod 2^δ is randomly chosen over a, b ∈ F_{2^δ}. We set n = 10; thus, the coding field size is 2^10. For each data point, we run the algebraic watchdog 200 times.

For simplicity, we assume that the nodes do not use an error-correcting code C_i; thus, d_i = 0 for all i. Note that this limits the power of the algebraic watchdog; the results shown can therefore be further improved by using error-correcting codes C_i. We denote by p∗_adv and p∗_relay the value of p∗ when the relay is adversarial and well-behaving, respectively, and by var_adv and var_relay the variance of p∗_adv and p∗_relay.

Our simulation results coincide with our analysis and intuition. Section IV noted that v_1 can detect adversarial errors when p_adv ≥ p_{m+1,1}. As shown in Table I, node v_1 is able to monitor its downstream node v_{m+1} when p_adv ≥ p_{m+1,1}: there is a significant change in the distribution of p∗ once p_adv ≥ p_{m+1,1} = 10%. Table II shows that an increase in redundancy (by using hash functions of larger length δ) helps v_1 detect malicious behaviors. Note that, for this simulation, a small δ already gives enough redundancy for v_1 to monitor v_{m+1}.

Table III shows that the overhearing channels between v_1 and v_i, i ∈ [2, m], are important in detection. This agrees with our intuition: the better node v_1 is able to infer the messages x_i, the better its detection abilities.
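The quantities behind these trends can be checked numerically. The sketch below evaluates the exponent from Theorem 4.1 under the simulation parameters above (m = 3, n = 10, δ = 2, d_i = 0) and the effective crossover probability of an adversarial relay; the formulas are transcribed from Section IV as reconstructed here, so treat this as a sanity check rather than the paper's own code.

```python
# Numeric sketch of two quantities from Section IV: the exponent of the
# expected number of matched codewords (Theorem 4.1) and the effective
# crossover probability of a BSC followed by adversarial bit flips.
# Parameter values mirror the Section VI simulation setup.

from math import log2

def H(p):
    """Binary entropy function H(p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def matched_exponent(n, delta, p_list, d_list):
    """Exponent of 2 in Theorem 4.1:
    n * [sum_i (H(p_i) - H(d_i/n)) - 1] - m * delta, with m = len(p_list)."""
    s = sum(H(p) - H(d / n) for p, d in zip(p_list, d_list))
    return n * (s - 1) - len(p_list) * delta

def effective_error(p_adv, p_ch):
    """Crossover probability of two cascaded BSCs: p1 + p2 - 2*p1*p2."""
    return p_adv + p_ch - 2 * p_adv * p_ch

# m = 3, n = 10, delta = 2, no error-correcting code (d_i = 0).
exponent = matched_exponent(n=10, delta=2, p_list=[0.1] * 3, d_list=[0] * 3)
p_eff = effective_error(p_adv=0.10, p_ch=0.10)
```

A negative exponent means the expected number of matched codewords is below one, which is consistent with the observation that a small hash length δ already suffices here; the effective crossover p_eff = 0.18 is what v_1 faces from the relay when p_adv = p_{m+1,1} = 10%.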
Thus, as the overhearing channels progressively worsen (p_s increases), v_1's ability to detect malicious behavior deteriorates, as shown in Table III. In Table IV, we note the effect of m: node v_1's ability to check v_{m+1} is reduced as m grows. When m increases, the number of messages to infer increases, which increases the uncertainty in the system. However, Table IV does not take into account that, with an increase in m, there are more nodes v_i, i ∈ [1, m], that perform checks on v_{m+1} independently.

TABLE III: The average and variance of p∗ with varying p_s. We set m = 3, n = 10, δ = 2, and p_adv = 10%.

p_s  | p∗_adv | var_adv | p∗_relay | var_relay
5%   | 0.0067 |         |          |
30%  | 0.0069 |         |          |

TABLE IV: The average and variance of p∗ with varying m. We set n = 10, δ = 2, p_s = p_{m+1,1} = 10%, and p_adv = 10%.

VII. CONCLUSIONS
In this paper, we have proposed a multi-hop, multi-source algebraic watchdog, which allows network-coded wireless systems to validate their information flows. A node monitors its downstream node by overhearing the transmissions of its neighboring nodes, and uses the overheard information to infer what the behavior of its downstream node should be. Using the algebraic watchdog scheme, nodes can compute a probability of misbehavior, which can be used to detect malicious behavior. Once a node has been identified as malicious, it can be punished, eliminated, or excluded from the network by using reputation-based schemes [2][8].

We have provided a trellis-like graphical model for the detection inference process, and provided an algorithm that may be used to compute the probability that a downstream node is consistent with the overheard information. We have analytically shown how the size of the hash function, the minimum distance of the code used, as well as the overhearing channel quality affect the probability of detection. Finally, we have presented preliminary simulation results that coincide with our analysis and intuition.

REFERENCES

[1] M. Kim, M. Médard, J. Barros, and R. Kötter, "An algebraic watchdog for wireless network coding," in Proceedings of IEEE ISIT, June 2009.
[2] S. Marti, T. J. Giuli, K. Lai, and M. Baker, "Mitigating routing misbehavior in mobile ad hoc networks," in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking. ACM, 2000, pp. 255–265.
[3] R. Ahlswede, N. Cai, S. R. Li, and R. Yeung, "Network information flow," IEEE Transactions on Information Theory, vol. 46, pp. 1204–1216, 2000.
[4] R. Koetter and M. Médard, "An algebraic approach to network coding," IEEE/ACM Transactions on Networking, vol. 11, pp. 782–795, 2003.
[5] G. Liang, R. Agarwal, and N. Vaidya, "When watchdog meets coding," in Proceedings of IEEE INFOCOM, March 2010.
[6] M. Kim, M. Médard, and J. Barros, "Countering Byzantine adversaries with network coding: An overhead analysis," in Proceedings of MILCOM, 2008.
[7] P. Elias, "List decoding for noisy channels," Technical Report 335, Research Laboratory of Electronics, MIT, 1957.
[8] S. Ganeriwal, L. K. Balzano, and M. B. Srivastava, "Reputation-based framework for high integrity sensor networks," ACM Transactions on Sensor Networks, vol. 4, no. 3, 2008.