Alexander E. Mohr
University of Washington
Publications
Featured research published by Alexander E. Mohr.
IEEE Journal on Selected Areas in Communications | 2000
Alexander E. Mohr; Eve A. Riskin; Richard E. Ladner
We present the unequal loss protection (ULP) framework in which unequal amounts of forward error correction are applied to progressive data to provide graceful degradation of image quality as packet losses increase. We develop a simple algorithm that can find a good assignment within the ULP framework. We use the set partitioning in hierarchical trees (SPIHT) coder in this work, but our algorithm can protect any progressive compression scheme. In addition, we promote the use of a probability mass function (PMF) of expected channel conditions so that our system can work with almost any model or estimate of packet losses. We find that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
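As an illustration of the packet layout this framework implies, the sketch below stripes each layer of a progressive bitstream across N equal-size packets with an assumed (N, k_i) erasure code per layer; the function name, the placeholder parity bytes, and the example numbers are ours, not the paper's.

```python
# Hedged sketch of ULP-style packet construction. A progressive bitstream is
# cut into layers; layer i occupies one "row" striped across all N packets,
# with k[i] data symbols and N - k[i] parity symbols from an (N, k[i])
# erasure code. Earlier (more important) layers get smaller k[i], i.e. more
# protection: any m received packets recover every layer with k[i] <= m,
# so quality degrades gracefully as losses increase.

def build_packets(bitstream: bytes, k: list[int], n_packets: int) -> list[bytearray]:
    """Stripe `bitstream` into n_packets equal-size packets.

    k[i] is the number of data symbols in row i; a real system would fill
    the remaining n_packets - k[i] slots with Reed-Solomon parity. Here the
    parity bytes are placeholders (0xFF) to keep the sketch dependency-free.
    """
    packets = [bytearray() for _ in range(n_packets)]
    pos = 0
    for ki in k:
        row = bitstream[pos:pos + ki]
        pos += ki
        for j in range(n_packets):
            if j < ki:
                packets[j].append(row[j] if j < len(row) else 0)
            else:
                packets[j].append(0xFF)  # placeholder for an RS parity symbol
    return packets

# Example: 4 layers across 8 packets; the first layer survives 6 losses,
# the last layer survives only 1.
pkts = build_packets(bytes(range(20)), k=[2, 4, 6, 7], n_packets=8)
assert len({len(p) for p in pkts}) == 1  # balanced, equal-size packets
```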
International Workshop on Peer-to-Peer Systems | 2005
Vinay Pai; Kapil Kumar; Karthik Tamilmani; Vinay Sambamurthy; Alexander E. Mohr
In this paper, we present Chainsaw, a peer-to-peer overlay multicast system that completely eliminates trees. Peers are notified of new packets by their neighbors and must explicitly request a packet from a neighbor in order to receive it. This way, duplicate data can be eliminated and a peer can ensure it receives all packets. We show with simulations that Chainsaw has a short startup time, good resilience to catastrophic failure, and essentially no packet loss. We support this argument with real-world experiments on PlanetLab and compare Chainsaw to Bullet and SplitStream using MACEDON.
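A minimal simulation of the pull-based exchange described above, assuming NOTIFY/REQUEST-style messages; the actual wire protocol and peer-selection policy are not reproduced here.

```python
# Sketch of Chainsaw's tree-free dissemination: peers advertise new packet
# IDs to neighbors (NOTIFY), and a neighbor explicitly requests each packet
# from one advertiser (REQUEST), so no duplicate data is pushed and gaps can
# simply be re-requested.

class Peer:
    def __init__(self, name: str):
        self.name = name
        self.have = set()       # packet IDs held locally
        self.offers = {}        # packet ID -> set of peers known to have it
        self.neighbors = []

    def receive(self, pkt_id: int):
        if pkt_id in self.have:
            return
        self.have.add(pkt_id)
        for nb in self.neighbors:                       # NOTIFY neighbors
            nb.offers.setdefault(pkt_id, set()).add(self)

    def pull(self):
        for pkt_id, holders in list(self.offers.items()):
            if pkt_id not in self.have and holders:
                src = min(holders, key=lambda p: p.name)
                # REQUEST pkt_id from src; delivery modeled as instantaneous
                self.receive(pkt_id)

# Tiny driver: a seed disseminates 5 packets over a full mesh of 6 peers.
peers = [Peer(f"p{i}") for i in range(6)]
for p in peers:
    p.neighbors = [q for q in peers if q is not p]
for pkt in range(5):
    peers[0].receive(pkt)
for _ in range(3):                                      # a few pull rounds
    for p in peers:
        p.pull()
assert all(p.have == set(range(5)) for p in peers)      # no packet loss
```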
International Conference on Image Processing | 2000
Alexander E. Mohr; Richard E. Ladner; Eve A. Riskin
This paper describes an algorithm that achieves an approximately optimal assignment of forward error correction to progressive data within the unequal loss protection framework. It first finds the optimal assignment under convex hull and fractional bit allocation assumptions. It then relaxes those constraints to find an assignment that approximates the global optimum. The algorithm has a running time of O(hN log N), where h is the number of points on the convex hull of the source's utility-cost curve and N is the number of packets transmitted.
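Under the convexity assumption, the first stage amounts to restricting the source to the upper convex hull of its utility-cost operating points. A hedged sketch of that step follows; the point values and names are illustrative.

```python
# Keep only operating points on the upper convex hull of the (cost, utility)
# curve: along the hull, marginal utility per bit is non-increasing, which is
# exactly what a greedy allocator of FEC needs.

def upper_convex_hull(points):
    """Return the points on the upper convex hull, sorted by cost."""
    pts = sorted(points)                  # sort by cost, then utility
    hull = []
    for c, u in pts:
        while len(hull) >= 2:
            (c1, u1), (c2, u2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the chord (c1,u1)-(c,u)
            if (u2 - u1) * (c - c1) <= (u - u1) * (c2 - c1):
                hull.pop()
            else:
                break
        hull.append((c, u))
    return hull

print(upper_convex_hull([(0, 0.0), (1, 5.0), (2, 6.0), (3, 9.0), (4, 9.5)]))
# -> [(0, 0.0), (1, 5.0), (3, 9.0), (4, 9.5)]; the point (2, 6.0) is dropped
# because it lies below the chord from (1, 5.0) to (3, 9.0).
```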
Data Compression Conference | 1999
Alexander E. Mohr; Eve A. Riskin; Richard E. Ladner
We present an algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase. We use the SPIHT coder to compress images in this work, but our algorithm can protect any progressive compression scheme. The algorithm can also use almost any function as a model of packet loss conditions. We find that for an exponential packet loss model with a mean loss rate of 20% and a total rate of 0.2 bits per pixel (bpp), good image quality can be obtained even when 40% of transmitted packets are lost.
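The objective such an assignment optimizes can be sketched as an expected utility under a loss PMF. The truncated-exponential form and the layer-recovery rule below are assumptions consistent with the ULP setup, and the utility numbers are made up.

```python
# Expected utility of an FEC assignment under a packet-loss PMF. Layer i
# decodes iff at least k[i] of the N packets arrive, and progressive data is
# only useful as a prefix, so the decodable prefix is the longest run of
# layers whose k[i] does not exceed the number of received packets.

import math

def loss_pmf(n_packets: int, mean_rate: float) -> list[float]:
    """Truncated-exponential PMF over the number of lost packets (one
    assumed form; the paper allows almost any loss model here)."""
    lam = 1.0 / (mean_rate * n_packets)
    w = [math.exp(-lam * losses) for losses in range(n_packets + 1)]
    s = sum(w)
    return [x / s for x in w]

def expected_utility(k: list[int], utility: list[float], pmf: list[float]) -> float:
    """utility[i] = utility of decoding layers 0..i."""
    n = len(pmf) - 1
    total = 0.0
    for losses, p in enumerate(pmf):
        received = n - losses
        prefix = 0
        while prefix < len(k) and k[prefix] <= received:
            prefix += 1
        total += p * (utility[prefix - 1] if prefix else 0.0)
    return total

pmf = loss_pmf(n_packets=8, mean_rate=0.20)
print(expected_utility(k=[2, 4, 6, 7], utility=[10.0, 20.0, 27.0, 31.0], pmf=pmf))
```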
International Conference on Image Processing | 1999
Agnieszka C. Miguel; Alexander E. Mohr; Eve A. Riskin
We present a simple and efficient scheme for using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm in a generalized multiple description framework. To combat packet loss, controlled amounts of redundancy are added to the original data during the compression process. Unequal loss protection is implemented by varying the amount of redundancy with the importance of data. The algorithm achieves graceful degradation of image quality in the presence of increasing description loss; high image quality is obtained even when over half of the descriptions are lost.
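One hedged way to picture redundancy that varies with importance is replication: earlier chunks of the progressive bitstream are carried by more descriptions than later ones. The replication profile and helper names below are illustrative, not the paper's construction.

```python
# Controlled redundancy via importance-weighted replication: chunk i of the
# progressive stream is copied into copies[i] of the n_desc descriptions,
# with copies[] non-increasing so the most important data survives the most
# description losses.

def make_descriptions(chunks: list[bytes], copies: list[int], n_desc: int):
    descs = [[] for _ in range(n_desc)]
    for i, chunk in enumerate(chunks):
        for j in range(copies[i]):
            descs[(i + j) % n_desc].append((i, chunk))  # spread copies evenly
    return descs

def decode(received):
    """Recover the longest decodable prefix of chunks from any subset of
    descriptions (progressive data is only useful as a prefix)."""
    have = {i for d in received for (i, _) in d}
    prefix = []
    i = 0
    while i in have:
        prefix.append(i)
        i += 1
    return prefix

descs = make_descriptions([b"hdr", b"mid", b"tail"], copies=[4, 2, 1], n_desc=4)
print(decode(descs[:2]))   # -> [0, 1]: even with half the descriptions lost,
                           # the heavily replicated early chunks survive
```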
International Conference on Image Processing | 1999
Alexander E. Mohr; Eve A. Riskin; Richard E. Ladner
We present an approach to the generalized multiple description problem (V.K. Goyal et al., 1998) that is fundamentally different from previously published algorithms. Our approach uses explicit channel coding in the form of unequal loss protection to obtain a solution that incorporates many important properties: it can be used with any progressive source coder; it generates a balanced encoding with information equally dispersed among the descriptions; it adds a quantifiable amount of redundancy; it adapts that amount of redundancy to expected channel conditions; and it can optimize for different distortion measures. These properties allow the system to gradually improve image quality as the number of received descriptions increases. We compare our system to previously published results and show that forward error correction in multiple description coding can surpass them by a significant margin.
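Since the descriptions here are the FEC-protected packets themselves, a small check can make the "balanced encoding" property concrete: under the prefix-decoding rule, quality depends only on how many descriptions arrive, never on which ones. The values below are illustrative.

```python
# Balanced-MDC property: with information dispersed equally across
# descriptions by the per-layer (N, k_i) codes, the decodable layer prefix
# is a function of the COUNT of received descriptions alone.

def decodable_prefix(received_count: int, k: list[int]) -> int:
    """Layers decode in order; layer i needs at least k[i] descriptions."""
    prefix = 0
    while prefix < len(k) and k[prefix] <= received_count:
        prefix += 1
    return prefix

k = [2, 4, 6, 7]                       # more protection for earlier layers
for m in range(9):                     # 0..8 of 8 descriptions received
    print(f"{m} descriptions -> {decodable_prefix(m, k)} layers")
# Quality improves monotonically with each additional received description.
```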
Algorithmica | 2005
Jason D. Hartline; Edwin S. Hong; Alexander E. Mohr; William Pentney; Emily Rocke
We consider history independent data structures as proposed for study by Naor and Teague. In a history independent data structure, nothing can be learned from the memory representation of the data structure except for what is available from the abstract data structure. We show that, for the most part, strong history independent data structures have canonical representations. We provide a natural alternative definition of strong history independence that is less restrictive than that of Naor and Teague and characterize how it restricts allowable representations. We also give a general formula for creating dynamically resizing history independent data structures and give a related impossibility result.
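The canonical-representation idea can be made concrete with a toy example: a set stored as a sorted, immutable tuple has a memory image determined entirely by its contents, so two different operation histories with the same final contents are indistinguishable. This is an illustration, not the paper's construction.

```python
# A history-independent set with a canonical representation: the layout
# depends only on the current contents, never on the order of inserts and
# deletes, so the representation leaks no history.

class HistoryIndependentSet:
    def __init__(self):
        self._items = ()                  # canonical: always sorted, immutable

    def insert(self, x):
        if x not in self._items:
            self._items = tuple(sorted(self._items + (x,)))

    def delete(self, x):
        self._items = tuple(v for v in self._items if v != x)

    def representation(self):
        return self._items

a, b = HistoryIndependentSet(), HistoryIndependentSet()
for x in (3, 1, 2):
    a.insert(x)
for x in (2, 3, 7, 1):
    b.insert(x)
b.delete(7)
# Different histories, same contents => identical representations.
assert a.representation() == b.representation() == (1, 2, 3)
```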
IEEE Transactions on Circuits and Systems for Video Technology | 2005
Justin Goshi; Alexander E. Mohr; Richard E. Ladner; Eve A. Riskin; Alan F. Lippman
We study the application of unequal loss protection (ULP) algorithms to motion-compensated video over lossy packet networks. In particular, we focus on streaming video applications over the Internet. The original ULP framework applies unequal amounts of forward error correction to embedded data to provide graceful degradation of quality in the presence of increasing packet loss. In this letter, we apply the ULP framework to baseline H.263, a video compression standard that targets low bit rates, by investigating reorderings of the bitstream to make it embedded. The reordering process allows a receiver to display good-quality video even at the loss rates encountered in wireless transmission and on the current Internet.
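A hedged sketch of what such a reordering might look like: regrouping per-macroblock fields by importance so that a truncated prefix still decodes to a usable frame. The three-class split and field names are illustrative simplifications of H.263 syntax, not the paper's exact scheme.

```python
# Reorder a non-embedded frame into an importance-ordered (embedded) stream:
# emit all headers first, then all motion vectors, then all DCT coefficients,
# so cutting the stream anywhere still leaves a decodable lower-quality frame
# for ULP to protect.

def reorder_embedded(macroblocks: list[dict]) -> bytes:
    """macroblocks: [{'header': bytes, 'mv': bytes, 'dct': bytes}, ...]"""
    stream = bytearray()
    for field in ("header", "mv", "dct"):     # most to least important
        for mb in macroblocks:
            stream += mb[field]
    return bytes(stream)

frame = [{"header": b"H0", "mv": b"M0", "dct": b"D0"},
         {"header": b"H1", "mv": b"M1", "dct": b"D1"}]
print(reorder_embedded(frame))   # -> b'H0H1M0M1D0D1'
```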
Data Compression Conference | 2002
Alexander E. Mohr
We show that the problem of optimal bit allocation among a set of independent discrete quantizers given a budget constraint is equivalent to the multiple choice knapsack problem (MCKP). This result has three implications: first, it provides a trivial proof that the problem of optimal bit allocation is NP-hard and that its related decision problem is NP-complete; second, it unifies research into solving these problems that has to date been done independently in the data compression community and the operations research community; third, many practical algorithms for approximating the optimal solution to MCKP can be used for bit allocation. We implement the GBFOS, partition-search, and Dudzinski-Walukiewicz algorithms and compare their running times for a variety of problem sizes.
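The MCKP mapping is easy to state in code: each quantizer is a class, each operating point an item with a bit cost and a utility, and exactly one item per class must be chosen within the budget. The dynamic program below illustrates the equivalence; it is a plain pseudo-polynomial solver, not one of the three algorithms benchmarked in the paper.

```python
# Bit allocation as a multiple choice knapsack problem: pick exactly one
# operating point per quantizer to maximize total utility within the budget.

def mckp_bit_allocation(quantizers, budget):
    """quantizers: list of classes; each class is a list of (bits, utility)
    operating points. Returns (best_utility, chosen_point_indices)."""
    NEG = float("-inf")
    # best[b] = (max utility reachable using exactly b bits, choices so far)
    best = [(0.0, [])] + [(NEG, [])] * budget
    for cls in quantizers:
        nxt = [(NEG, [])] * (budget + 1)
        for b in range(budget + 1):
            u, picks = best[b]
            if u == NEG:
                continue
            for idx, (bits, util) in enumerate(cls):
                if b + bits <= budget and u + util > nxt[b + bits][0]:
                    nxt[b + bits] = (u + util, picks + [idx])
        best = nxt
    return max(best)

qz = [[(1, 2.0), (3, 5.0), (5, 6.0)],     # quantizer A's operating points
      [(2, 3.0), (4, 7.0)]]               # quantizer B's operating points
print(mckp_bit_allocation(qz, budget=7))  # -> (12.0, [1, 1])
```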
International Symposium on Computers and Communications | 2009
Sandra P. Tinta; Alexander E. Mohr; Jennifer L. Wong
Packet reordering is an Internet event that degrades the performance of both TCP- and UDP-based applications. In this paper, we present an end-to-end measurement study of packet reordering in UDP traffic. The goal of our measurement study is to characterize packet reordering in the current Internet as reflected by the PlanetLab infrastructure. Overall, our analysis shows that reordering of current UDP traffic is consistent with studies from the 1990s, despite increased Internet load and advances in technology. In addition, our study extends previous results by identifying additional reordering characteristics. More specifically, we show that packet reordering is asymmetric as well as temporally and site-dependent, that packet size influences the likelihood of reordering, that there is a time-of-day dependency, and that reordering exists primarily at two timescales (a few milliseconds or multiple tens of milliseconds).
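For reference, a basic receiver-side reordering count of the kind such measurements build on: a packet is flagged as reordered when it arrives bearing a smaller sequence number than one already seen (an RFC 4737-style event). The paper's metrics are richer than this sketch.

```python
# Count reordered packets from receiver-side arrival order: probe packets
# carry monotonically increasing sequence numbers, and any packet arriving
# after a higher-numbered one is counted as reordered.

def count_reordered(arrivals: list[int]) -> int:
    """arrivals: sequence numbers in order of arrival at the receiver."""
    reordered = 0
    max_seen = -1
    for seq in arrivals:
        if seq < max_seen:
            reordered += 1            # arrived after a later-sent packet
        else:
            max_seen = seq
    return reordered

print(count_reordered([0, 1, 3, 2, 4, 7, 5, 6]))   # -> 3
```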