MinJi Kim
Massachusetts Institute of Technology
Publication
Featured research published by MinJi Kim.
performance evaluation methodologies and tools | 2011
MinJi Kim; Muriel Médard; João Barros
We analyze the performance of TCP and TCP with network coding (TCP/NC) in lossy wireless networks. We build upon the simple framework introduced by Padhye et al. and characterize the throughput behavior of classical TCP as well as TCP/NC as a function of erasure rate, round-trip time, maximum window size, and duration of the connection. Our analytical results show that network coding masks random erasures from TCP, thus preventing TCP's performance degradation in lossy networks (e.g. wireless networks). It is further seen that TCP/NC has significant throughput gains over TCP. Our analysis and simulation results agree closely and confirm that TCP/NC is robust against erasures. TCP/NC is not only able to increase its window size faster but also to maintain a large window size despite the random losses, whereas TCP experiences window closing because losses are mistakenly attributed to congestion. Note that network coding only masks random erasures and still allows TCP to react to congestion; thus, when there are correlated losses, TCP/NC also closes its window.
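To make the qualitative comparison concrete, here is a toy numerical sketch. The formulas are illustrative assumptions only: a simplified Padhye-style square-root approximation for classical TCP, and a crude erasure-masking model for TCP/NC in which the window stays near its maximum; neither is the paper's exact expression.

```python
import math

def tcp_throughput(p, rtt, w_max):
    """Padhye-style approximation: throughput (packets/s) limited either by
    the maximum window or by the square-root loss term."""
    if p == 0:
        return w_max / rtt
    return min(w_max / rtt, math.sqrt(3.0 / (2.0 * p)) / rtt)

def tcp_nc_throughput(p, rtt, w_max):
    """Illustrative TCP/NC model: coding masks random erasures, so the window
    stays near w_max and throughput shrinks only by the erasure rate itself."""
    return (1.0 - p) * w_max / rtt

rtt, w_max = 0.1, 50  # 100 ms round-trip time, 50-packet maximum window
for p in (0.001, 0.01, 0.05):
    print(f"p={p:<6} TCP={tcp_throughput(p, rtt, w_max):6.1f} pkt/s   "
          f"TCP/NC={tcp_nc_throughput(p, rtt, w_max):6.1f} pkt/s")
```

Even at an erasure rate of a few percent the loss-limited TCP term collapses, while the masked model degrades only linearly in the erasure rate.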
international conference on computer communications | 2010
MinJi Kim; Daniel E. Lucani; Xiaomeng Shi; Fang Zhao; Muriel Médard
Multi-resolution codes enable multicast at different rates to different receivers, a setup that is often desirable for graphics or video streaming. We propose a simple, distributed, two-stage message passing algorithm to generate network codes for single-source multicast of multi-resolution codes. The goal of this pushback algorithm is to maximize the total rate achieved by all receivers, while guaranteeing decodability of the base layer at each receiver. By conducting pushback and code assignment stages, this algorithm takes advantage of inter-layer as well as intra-layer coding. Numerical simulations show that in terms of total rate achieved, the pushback algorithm outperforms routing and intra-layer coding schemes, even with field sizes as small as 2^10 (10 bits). In addition, the performance gap widens as the number of receivers and the number of nodes in the network increase. We also observe that naive inter-layer coding schemes may perform worse than intra-layer schemes under certain network conditions.
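As a rough illustration of the two-stage structure, the sketch below runs a pushback pass (leaves to source) followed by an assignment pass (source to leaves) on a tiny DAG. The graph, the rate-aggregation rule, and the per-edge caps are simplifying assumptions for illustration, not the paper's exact algorithm, and the base-layer decodability guarantee is not modeled.

```python
edges = {                      # node -> children in a small layered DAG
    "s": ["a", "b"],
    "a": ["r1", "r2"],
    "b": ["r2", "r3"],
}
wanted = {"r1": 1, "r2": 3, "r3": 2}   # layers each receiver asks for

def pushback(node):
    """Stage 1 (leaves -> source): propagate layer requests upstream."""
    if node in wanted:
        return wanted[node]
    return max(pushback(child) for child in edges[node])

def assign(node, layers, assignment):
    """Stage 2 (source -> leaves): cap each edge by the child's request."""
    for child in edges.get(node, ()):
        carried = min(layers, pushback(child))
        assignment[(node, child)] = carried
        assign(child, carried, assignment)

assignment = {}
assign("s", pushback("s"), assignment)
for (u, v), layers in sorted(assignment.items()):
    print(f"edge {u}->{v} carries layers 1..{layers}")
```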
allerton conference on communication, control, and computing | 2010
MinJi Kim; Muriel Médard
The deterministic wireless relay network model, introduced by Avestimehr et al., has been proposed for approximating Gaussian relay networks. This model, known as the ADT network model, takes into account the broadcast nature of the wireless medium as well as interference. Avestimehr et al. showed that the min-cut max-flow theorem holds in ADT networks.
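The min-cut max-flow statement can be illustrated with a small computation. The sketch below runs Edmonds-Karp on a toy unit-capacity layered graph; a real ADT network would additionally model broadcast and interference constraints at the bit level, which this example omits.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS augmenting paths on a capacity dict-of-dicts."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:      # augment along the path by 1 unit
            u = parent[v]
            cap[u][v] -= 1
            cap[v].setdefault(u, 0)
            cap[v][u] += 1
            v = u
        flow += 1

# source S reaches relays A1, A2; relays forward to destination D
cap = {
    "S":  {"A1": 1, "A2": 1},
    "A1": {"D": 1},
    "A2": {"D": 1},
    "D":  {},
}
print("min-cut / max-flow from S to D:", max_flow(cap, "S", "D"))
```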
IEEE Journal on Selected Areas in Communications | 2010
MinJi Kim; Luísa Lima; Fang Zhao; João Barros; Muriel Médard; Ralf Koetter; Ton Kalker; Keesook J. Han
Random linear network coding can be used in peer-to-peer networks to increase the efficiency of content distribution and distributed storage. However, these systems are particularly susceptible to Byzantine attacks. We quantify the impact of Byzantine attacks on the coded system by evaluating the probability that a receiver node fails to correctly recover a file. We show that even for a small probability of attack, the system fails with overwhelming probability. We then propose a novel signature scheme that allows packet-level Byzantine detection. This scheme allows one-hop containment of the contamination, and saves bandwidth by allowing nodes to detect and drop the contaminated packets. We compare the net cost of our signature scheme with various other Byzantine schemes, and show that when the probability of Byzantine attacks is high, our scheme is the most bandwidth efficient.
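The claim that even a small attack probability makes the system fail with overwhelming probability follows from a simple compounding argument: if one contaminated packet mixed into a generation spoils decoding, failure compounds across the generation. The sketch below assumes this simplified model (the generation size and the independence assumption are illustrative, not the paper's exact analysis).

```python
def failure_probability(p_attack, generation_size):
    """P(at least one packet in the generation is contaminated), assuming
    independent per-packet attacks and that one bad packet spoils decoding."""
    return 1.0 - (1.0 - p_attack) ** generation_size

for p in (0.001, 0.01, 0.05):
    print(f"p_attack={p:<6} P(decoding fails), 100-packet generation: "
          f"{failure_probability(p, 100):.3f}")
```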
IEEE Journal on Selected Areas in Communications | 2011
MinJi Kim; Muriel Médard; João Barros
We propose a secure scheme for wireless network coding, called the algebraic watchdog. By enabling nodes to detect malicious behaviors probabilistically and use overheard messages to police their downstream neighbors locally, the algebraic watchdog delivers a secure global self-checking network. Unlike traditional Byzantine detection protocols, which are receiver-based, this protocol gives the senders an active role in checking the nodes downstream. The key idea is inspired by Marti et al.'s watchdog-pathrater, which attempts to detect and mitigate the effects of routing misbehavior. We first focus on a two-hop network. We present a graphical model to understand the inference process nodes execute to police their downstream neighbors, as well as to compute, analyze, and approximate the probabilities of misdetection and false detection. We also present an algebraic analysis of the performance using a hypothesis testing framework that provides exact formulae for the probabilities of false detection and misdetection. We then extend the algebraic watchdog to a more general network setting, and propose a protocol in which we can establish trust in coded systems in a distributed manner. We develop a graphical model to detect the presence of an adversarial node downstream within a general multi-hop network. The structure of the graphical model (a trellis) lends itself to well-known algorithms (e.g. the Viterbi algorithm) that can compute the probabilities of misdetection and false detection. We show that as long as the min-cut is not dominated by the adversaries, upstream nodes can monitor downstream neighbors and allow reliable communication with certain probability. Finally, we present simulation results that support our analysis.
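A rough Monte Carlo sketch of the two-hop setting is below: the sender knows the combination the relay should forward, overhears the relay's actual transmission through a noisy channel, and applies a threshold test on the Hamming distance. The packet length, noise level, threshold, and attack model are illustrative assumptions, not the paper's exact hypothesis-testing formulae.

```python
import random

N_BITS, NOISE, THRESH, TRIALS = 64, 0.05, 10, 20000

def overhear(packet, flip_prob):
    """Each bit of the overheard packet is flipped with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in packet]

def watchdog_flags(expected, overheard):
    """Flag the relay if the overheard packet is too far from the expected one."""
    return sum(a != b for a, b in zip(expected, overheard)) > THRESH

random.seed(1)
false_det = misdet = 0
for _ in range(TRIALS):
    expected = [random.getrandbits(1) for _ in range(N_BITS)]
    # honest relay: transmits exactly the expected combination
    if watchdog_flags(expected, overhear(expected, NOISE)):
        false_det += 1
    # adversarial relay: substitutes an unrelated packet
    attack = [random.getrandbits(1) for _ in range(N_BITS)]
    if not watchdog_flags(expected, overhear(attack, NOISE)):
        misdet += 1

print(f"P(false detection) ~ {false_det / TRIALS:.4f}")
print(f"P(misdetection)    ~ {misdet / TRIALS:.4f}")
```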
information theory workshop | 2011
Bernhard Haeupler; MinJi Kim; Muriel Médard
We analyze distributed and packetized implementations of random linear network coding (PNC) with buffers. In these protocols, nodes store received packets in order to later produce coded packets that reflect this information. We show the optimality of PNC for any buffer size; i.e., we show that PNC performs at least as well as any protocol with the same buffer size. In other words, a multicast task completes at exactly the first time at which, in hindsight, it was possible to route information from the sources to each receiver individually given the buffer constraint, i.e., with the buffer used at each node never exceeding its buffer size. This shows that PNC, even without any feedback or explicit buffer management, keeps buffer sizes minimal while maintaining optimal performance.
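A minimal sketch of a PNC node with a finite buffer is shown below, simplified to GF(2). The recoding rule used when the buffer is full (fold the new packet into a random buffered slot) is one plausible choice assumed here for illustration; the paper's setting and analysis are more general.

```python
import random

class PNCNode:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffer = []          # list of packets, each a list of GF(2) bits

    def receive(self, packet):
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(packet)
        else:                     # buffer full: fold into a random slot
            i = random.randrange(self.buffer_size)
            self.buffer[i] = [a ^ b for a, b in zip(self.buffer[i], packet)]

    def send(self):
        """Random non-zero GF(2) linear combination of buffered packets."""
        while not any(coeffs := [random.getrandbits(1) for _ in self.buffer]):
            pass                  # redraw to avoid the all-zero combination
        out = [0] * len(self.buffer[0])
        for c, pkt in zip(coeffs, self.buffer):
            if c:
                out = [a ^ b for a, b in zip(out, pkt)]
        return out

random.seed(0)
node = PNCNode(buffer_size=2)
for _ in range(5):                # more packets arrive than the buffer holds
    node.receive([random.getrandbits(1) for _ in range(8)])
print("coded packet:", node.send())
```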
military communications conference | 2008
MinJi Kim; Muriel Médard; João Barros
Network coding increases throughput and is robust against failures and erasures. However, since it allows mixing of information within the network, a single corrupted packet generated by a Byzantine attacker can easily contaminate the information delivered to multiple destinations. In this paper, we study the transmission overhead associated with three different schemes for detecting Byzantine adversaries at a node using network coding: end-to-end error correction, a packet-based Byzantine detection scheme, and a generation-based Byzantine detection scheme. With end-to-end error correction, it is known that we can correct errors up to the min-cut between the source and destinations. If we instead use Byzantine detection schemes, we can detect polluted data, drop them, and therefore transmit only valid data. For the dropped data, the destinations perform erasure correction, which is computationally lighter than error correction. We show that, with enough attackers present in the network, Byzantine detection schemes may improve the throughput of the network, since we choose to forward only reliable information. When the probability of attack is high, a packet-based detection scheme is the most bandwidth efficient; however, when the probability of attack is low, the overhead involved in signing each packet becomes costly, and the generation-based scheme may be preferred. Finally, we characterize the tradeoff between generation size and detection overhead in bits as the probability of attack increases in the network.
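The crossover between the two detection schemes can be illustrated with a toy goodput model: packet-based detection pays a signature cost on every packet but drops only polluted packets, while generation-based detection amortizes its check over a generation but discards the whole generation when any packet in it is polluted. All constants below are illustrative assumptions, not the paper's measurements.

```python
PKT_BITS, SIG_BITS, GEN_SIZE, GEN_CHECK_BITS = 12000, 1024, 32, 1024

def packet_scheme_goodput(p):
    """Each packet carries a signature; only polluted packets are dropped."""
    return (1.0 - p) * PKT_BITS / (PKT_BITS + SIG_BITS)

def generation_scheme_goodput(p):
    """One check per generation; any pollution discards the whole generation."""
    gen_clean = (1.0 - p) ** GEN_SIZE
    per_pkt_overhead = GEN_CHECK_BITS / GEN_SIZE
    return gen_clean * PKT_BITS / (PKT_BITS + per_pkt_overhead)

for p in (0.0, 0.001, 0.01, 0.05):
    print(f"p={p:<6} packet-based={packet_scheme_goodput(p):.3f}  "
          f"generation-based={generation_scheme_goodput(p):.3f}")
```

With these numbers the generation-based scheme wins at low attack probability and the packet-based scheme wins at high attack probability, mirroring the tradeoff described above.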
IEEE Transactions on Information Theory | 2014
Elona Erez; MinJi Kim; Yun Xu; Edmund M. Yeh; Muriel Médard
The capacity of multiuser networks has been a long-standing problem in information theory. Recently, Avestimehr et al. have proposed a deterministic network model to approximate multiuser wireless networks. This model, known as the ADT network model, takes into account the broadcast nature as well as the multiuser interference inherent in the wireless medium. For the types of connections we consider, we show that the results of Avestimehr et al. under the ADT model can be reinterpreted within the algebraic network coding framework introduced by Koetter and Médard. Using this framework, we propose an efficient distributed linear code construction for the deterministic wireless multicast relay network model. Unlike several previous coding schemes, we do not attempt to find flows in the network. Instead, for a layered network, we maintain an invariant requiring that, at each stage of the code construction, certain sets of codewords remain linearly independent.
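A minimal sketch of the kind of invariant involved is below, simplified to GF(2) and a single stage: random local coding coefficients are redrawn until the stage's output codewords are linearly independent. The field size, the rejection-sampling loop, and the rank target are assumptions for illustration, not the paper's construction.

```python
import random

def rank_gf2(rows):
    """Gaussian elimination over GF(2); rows are ints used as bit vectors."""
    rank = 0
    rows = list(rows)
    for bit in reversed(range(max(rows).bit_length() if rows else 0)):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank

def extend_stage(codewords, n_outputs, dim):
    """Draw random GF(2) combinations of incoming codewords until the
    outputs of this stage are linearly independent."""
    while True:
        outs = []
        for _ in range(n_outputs):
            combo = 0
            for cw in codewords:
                if random.getrandbits(1):
                    combo ^= cw
            outs.append(combo)
        if rank_gf2(outs) == min(n_outputs, dim):
            return outs

random.seed(3)
source = [0b001, 0b010, 0b100]         # independent source codewords, dim 3
stage1 = extend_stage(source, 3, 3)
print([format(c, "03b") for c in stage1], "rank:", rank_gf2(stage1))
```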
international conference on communications | 2014
MinJi Kim; Jason Cloud; Ali ParandehGheibi; Leonardo Urbina; Kerim Fouli; Douglas J. Leith; Muriel Médard
The application of congestion control can significantly degrade the quality of service experienced at higher layers, especially under high packet loss rates. The throughput loss caused by congestion control misinterpreting packet losses in poor channels is further compounded for applications such as HTTP and video, leading to a significant decrease in the users' quality of service. We therefore consider the application of congestion control to transport layer packet streams that use error-correction coding in order to recover from packet losses. We introduce a modified AIMD approach, develop an approximate mathematical model suited to performance analysis, and present extensive experimental measurements in both the lab and the “wild” to evaluate performance. Our measurements highlight the potential for remarkable performance gains, in terms of throughput and upper layer quality of service, when using coded transports.
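A toy simulation of the underlying idea is below: classic AIMD halves its window on every observed loss, while a coded transport lets the code mask random losses and, in this simplified model with no actual congestion, never backs off. The window rules and parameters are illustrative assumptions, not the paper's modified AIMD algorithm.

```python
import random

def simulate(p_random_loss, masked, rounds=2000, w_max=50):
    """Average goodput (packets/RTT) of a toy AIMD sender over a lossy link."""
    random.seed(7)
    w, delivered = 1.0, 0.0
    for _ in range(rounds):
        lost = any(random.random() < p_random_loss for _ in range(int(w)))
        delivered += w * (1.0 - p_random_loss)
        if lost and not masked:
            w = max(1.0, w / 2.0)          # classic multiplicative decrease
        else:
            w = min(w_max, w + 1.0)        # additive increase
    return delivered / rounds

p = 0.02
print(f"classic AIMD  : {simulate(p, masked=False):5.1f} pkt/RTT")
print(f"coded (masked): {simulate(p, masked=True):5.1f} pkt/RTT")
```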
international conference on communications | 2013
Giuliano Pezzolo Giacaglia; Xiaomeng Shi; MinJi Kim; Daniel E. Lucani; Muriel Médard
A characterization of systematic network coding over multi-hop wireless networks is key to understanding the trade-off between complexity and delay performance of networks that preserve the systematic structure. This paper studies the case of a relay channel, where the source's objective is to deliver a given number of data packets to a receiver with the aid of a relay. The source broadcasts to both the receiver and the relay using one frequency, while the relay uses another frequency for transmissions to the receiver, allowing for full-duplex operation of the relay. We analyze the decoding complexity and delay performance of two types of relays: one that preserves the systematic structure of the code from the source, and another that does not. A systematic relay forwards uncoded packets upon reception, but transmits coded packets to the receiver after receiving the first coded packet from the source. A non-systematic relay, on the other hand, always transmits linear combinations of previously received packets. We compare the performance of these two alternatives by analytically characterizing the expected transmission completion time as well as the number of uncoded packets forwarded by the relay. Our numerical results show that, for a poor channel between the source and the receiver, preserving the systematic structure at the relay (i) allows a significant increase in the number of uncoded packets received by the receiver, thus reducing the decoding complexity, and (ii) preserves close to optimal delay performance.
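A rough Monte Carlo sketch of the comparison is below. The erasure probabilities, the degrees-of-freedom bookkeeping, and the "systematic phase" rule are simplifying assumptions rather than the paper's analysis; the point is only that, with a poor source-receiver link, a systematic relay delivers noticeably more uncoded packets while completion time is essentially unchanged.

```python
import random

def trial(M, systematic, p_sr, p_sa, p_ar):
    """One run: slots until the receiver holds M degrees of freedom, and
    how many packets arrived uncoded."""
    rx = relay = uncoded = t = 0
    while rx < M:
        t += 1
        sys_phase = t <= M                 # source sends uncoded packets first
        if random.random() > p_sa and relay < M:
            relay += 1                     # relay gains a degree of freedom
        if random.random() > p_sr and rx < M:
            rx += 1
            uncoded += sys_phase           # direct packets arrive uncoded
        # relay transmits in the same slot (full duplex, other frequency);
        # its packet is innovative only if it knows more than the receiver
        if relay > rx and random.random() > p_ar:
            rx += 1
            uncoded += systematic and sys_phase
    return t, uncoded

random.seed(5)
for systematic in (True, False):
    res = [trial(16, systematic, p_sr=0.6, p_sa=0.1, p_ar=0.1)
           for _ in range(4000)]
    mean_t = sum(t for t, _ in res) / len(res)
    mean_u = sum(u for _, u in res) / len(res)
    print(f"systematic={systematic}: {mean_t:5.1f} slots, "
          f"{mean_u:4.1f} uncoded packets at receiver")
```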