Publication


Featured research published by Ananda Theertha Suresh.


Information Theory Workshop | 2010

Strong secrecy for erasure wiretap channels

Ananda Theertha Suresh; Arunkumar Subramanian; Andrew Thangaraj; Matthieu R. Bloch; Steven W. McLaughlin

We show that duals of certain low-density parity-check (LDPC) codes, when used in a standard coset coding scheme, provide strong secrecy over the binary erasure wiretap channel (BEWC). This result hinges on a stopping set analysis of ensembles of LDPC codes with block length n and girth greater than 2k for some k. We show that if the minimum left degree of the ensemble is l_min, the expected probability of block error is O(1/n^(⌈l_min·k/2⌉ − k)) when the erasure probability ε < ε_ef, where ε_ef depends on the degree distribution of the ensemble. As long as l_min > 2 and k > 2, the dual of this LDPC code provides strong secrecy over a BEWC of erasure probability greater than 1 − ε_ef.
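To make the coset coding idea concrete, here is a minimal sketch of syndrome-based coset coding over GF(2), assuming a noiseless main channel: the secret message is used as a syndrome of a parity-check matrix H, the transmitter sends a uniformly random vector from the corresponding coset, and the legitimate receiver recovers the message by recomputing the syndrome. The toy matrix H and the rejection-sampling encoder are illustrative assumptions only; in the paper the relevant codes are duals of LDPC codes with the stated girth and degree properties.

    import numpy as np

    # Toy 2 x 5 parity-check matrix: 2 secret bits hidden in 5 transmitted bits.
    # In the paper's setting H would come from the dual of an LDPC code.
    H = np.array([[1, 0, 1, 1, 0],
                  [0, 1, 1, 0, 1]], dtype=int)

    def encode(message_bits, H, rng=np.random.default_rng()):
        """Send a uniformly random vector x whose syndrome H x (mod 2) equals the message."""
        n = H.shape[1]
        while True:  # rejection sampling -- fine for a toy example
            x = rng.integers(0, 2, size=n)
            if np.array_equal(H @ x % 2, message_bits):
                return x

    def decode(x, H):
        """Legitimate receiver (noiseless main channel) recomputes the syndrome."""
        return H @ x % 2

    msg = np.array([1, 0])
    x = encode(msg, H)
    assert np.array_equal(decode(x, H), msg)

Roughly speaking, the random choice of coset representative is what confuses the eavesdropper: over an erasure wiretap channel, secrecy depends on how much the unerased positions reveal about the syndrome, which is where the stopping-set analysis of the abstract enters.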


International Symposium on Information Theory | 2014

Sorting with adversarial comparators and application to density estimation

Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh

We consider the problems of sorting and maximum-selection of n elements using adversarial comparators. We derive a maximum-selection algorithm that uses 8n comparisons in expectation, and a sorting algorithm that uses 4n log₂ n comparisons in expectation. Both are tight up to a constant factor. Our adversarial-comparator model was motivated by the practically important problem of density estimation, where we observe samples from an unknown distribution and try to determine which of n known distributions is closest to it. Existing algorithms run in Ω(n²) time. Applying the adversarial-comparator results, we derive a density-estimation algorithm that runs in only O(n) time.
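As a rough illustration of the access model (not the paper's 8n-comparison algorithm), the sketch below runs a single-elimination tournament under a comparator oracle whose answers may be arbitrary when the two items are close; in the density-estimation application each item would be a candidate distribution and the comparator a Scheffé-style test driven by the samples. The function names are hypothetical.

    import random

    def knockout_max(items, compare):
        """Single-elimination maximum-selection under a comparator oracle.

        compare(a, b) returns the preferred item; in the adversarial model its
        answer may be wrong (arbitrary) when a and b are nearly equivalent.
        """
        pool = list(items)
        random.shuffle(pool)
        while len(pool) > 1:
            winners = []
            for i in range(0, len(pool) - 1, 2):
                winners.append(compare(pool[i], pool[i + 1]))
            if len(pool) % 2 == 1:   # odd element gets a bye
                winners.append(pool[-1])
            pool = winners
        return pool[0]

    # Example with an honest comparator on integers; an adversarial comparator
    # would be free to answer arbitrarily on near-ties.
    print(knockout_max(range(100), lambda a, b: a if a >= b else b))   # -> 99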


International Symposium on Turbo Codes and Iterative Information Processing | 2010

Strong and weak secrecy in wiretap channels

Arunkumar Subramanian; Ananda Theertha Suresh; Safitha Raj; Andrew Thangaraj; Matthieu R. Bloch; Steven W. McLaughlin

In the wiretap channel model, symbols transmitted through a main channel to a legitimate receiver are observed by an eavesdropper across a wiretapper's channel. The goal of coding for wiretap channels is to facilitate error-free decoding across the main channel while ensuring zero information transfer across the wiretapper's channel. Strong secrecy requires the total information transfer to the eavesdropper to tend to zero, while weak secrecy requires the per-symbol information transfer to go to zero. In this paper, we consider coding methods for binary wiretap channels with a noiseless main channel and a BEC or BSC wiretapper's channel. We provide conditions and codes that achieve strong and weak secrecy for the BEC case. For the BSC case, we discuss some existing coding methods and develop additional criteria for secrecy.
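In the usual notation, with secret message M and eavesdropper observation Z^n over n channel uses, the two notions in the abstract can be written as follows (a standard formulation, not quoted from the paper):

    \text{weak secrecy:}\quad \lim_{n\to\infty} \frac{1}{n}\, I(M; Z^n) = 0,
    \qquad
    \text{strong secrecy:}\quad \lim_{n\to\infty} I(M; Z^n) = 0.

Strong secrecy is the stricter requirement: the total leaked information, not merely its per-symbol rate, must vanish.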


International Symposium on Information Theory | 2014

Sublinear algorithms for outlier detection and generalized closeness testing

Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh

Outlier detection is the problem of finding a few different distributions in a set of mostly identical ones. Closeness testing is the problem of deciding whether two distributions are identical or different. We relate the two problems, construct a sub-linear generalized closeness test for unequal sample lengths, and use this result to derive a sub-linear universal outlier detector. We also lower bound the sample complexity of both problems.


International Symposium on Information Theory | 2013

Tight bounds for universal compression of large alphabets

Jayadev Acharya; Hirakendu Das; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh

Over the past decade, several papers, e.g., [1-7] and references therein, have considered universal compression of sources over large alphabets, often using patterns to avoid infinite redundancy. Improving on previous results, we prove tight bounds on expected- and worst-case pattern redundancy, in particular closing a decade-long gap and showing that the worst-case pattern redundancy of i.i.d. distributions is Θ(n^(1/3)).


Proceedings of the National Academy of Sciences of the United States of America | 2016

Optimal prediction of the number of unseen species

Alon Orlitsky; Ananda Theertha Suresh; Yihong Wu

Significance: Many scientific applications ranging from ecology to genetics use a small sample to estimate the number of distinct elements, known as "species," in a population. Classical results have shown that n samples can be used to estimate the number of species that would be observed if the sample size were doubled to 2n. We obtain a class of simple algorithms that extend the estimate all the way to n log n samples, and we show that this is also the largest possible estimation range. Therefore, statistically speaking, the proverbial bird in the hand is worth log n in the bush. The proposed estimators outperform existing ones on several synthetic and real datasets collected in various disciplines.

Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42−58], uses n samples to predict the number U of hitherto unseen species that would be observed if t·n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45−63] constructed an intriguing estimator that predicts U for all t ≤ 1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435−447] proposed a modification that empirically predicts U even for some t > 1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t ∝ log n. We also show that this range is the best possible and that the estimator's mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron−Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product.
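The Good−Toulmin estimator referenced above has a simple closed form: if Φ_i denotes the number of species observed exactly i times among the n samples, the predicted number of new species after t·n further samples is Û(t) = −Σ_{i≥1} (−t)^i Φ_i. A minimal sketch follows; the smoothing that stabilizes the estimator for t > 1, which is the paper's contribution, is deliberately omitted here.

    from collections import Counter

    def good_toulmin(samples, t):
        """Good-Toulmin estimate of the number of new species that would be seen
        if t * len(samples) additional samples were collected."""
        counts = Counter(samples)            # species -> number of occurrences
        phi = Counter(counts.values())       # i -> number of species seen exactly i times
        return -sum(((-t) ** i) * f for i, f in phi.items())

    # Example: 10 samples over 6 species; estimate for doubling the sample (t = 1).
    print(good_toulmin(list("aaabbcddef"), 1.0))   # -> 2.0

For t ≤ 1 the alternating series is well behaved, but for t > 1 the terms (−t)^i blow up, which is exactly why the smoothed variants analyzed in the paper are needed to reach t ∝ log n.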


International Symposium on Information Theory | 2015

Automata and graph compression

Mehryar Mohri; Michael Riley; Ananda Theertha Suresh

We present a theoretical framework for the compression of automata, which are widely used representations in speech processing, natural language processing and many other tasks. As a corollary, our framework further covers graph compression. We introduce a probabilistic process of graph and automata generation that is similar to stationary ergodic processes and that covers real-world phenomena. We also introduce a universal compression scheme LZA for this probabilistic model and show that LZA significantly outperforms other compression techniques such as gzip and the UNIX compress command for several synthetic and real data sets.


International Symposium on Information Theory | 2014

Efficient compression of monotone and m-modal distributions

Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh

We consider universal compression of n samples drawn independently according to a monotone or m-modal distribution over k elements. We show that for all these distributions, the per-sample redundancy diminishes to 0 if k = exp(o(n/log n)) and is at least a constant if k = exp(Ω(n)).


IEEE Transactions on Communications | 2013

Interplay Between Optimal Selection Scheme, Selection Criterion, and Discrete Rate Adaptation in Opportunistic Wireless Systems

Neelesh B. Mehta; Rajat Talak; Ananda Theertha Suresh

An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain, or, in general, a real-valued metric, to a timer value. The mapping is such that timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy, in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
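To make the timer scheme concrete, here is a minimal sketch using a simple decreasing linear map from metric to timer value; this is an illustrative mapping only, not the optimal mapping derived in the paper, and T_MAX and the metric range are assumed values.

    T_MAX = 10e-3   # maximum timer value in seconds (assumed for illustration)

    def metric_to_timer(metric, metric_max=1.0):
        """Decreasing map: a higher channel metric yields an earlier timer expiry.
        The paper optimizes this mapping jointly with the discrete-rate adaptation
        rule and the selection policy."""
        clipped = min(max(metric, 0.0), metric_max)
        return T_MAX * (1.0 - clipped / metric_max)

    def select_best(node_metrics):
        """Simulate distributed selection: the node whose timer expires first,
        i.e., the node with the highest metric, is selected."""
        timers = {node: metric_to_timer(m) for node, m in node_metrics.items()}
        return min(timers, key=timers.get)

    print(select_best({"A": 0.3, "B": 0.9, "C": 0.55}))   # -> B

Because each node computes its timer from purely local channel knowledge, no central coordinator needs to collect every node's gain, which is the sense in which the scheme is low-feedback.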


International Symposium on Information Theory | 2016

Estimating the number of defectives with group testing

Moein Falahatgar; Ashkan Jafarpour; Alon Orlitsky; Venkatadheeraj Pichapati; Ananda Theertha Suresh

Estimating the number of defective elements of a set has various biological applications, including estimating the prevalence of a disease or disorder. Group testing has been shown to be more efficient than scrutinizing each element separately for defectiveness. In group testing, we query a subset of elements, and the result of the query is defective if the subset contains at least one defective element. We present an adaptive, randomized group-testing algorithm that estimates the number of defective elements with a near-optimal number of queries. Our algorithm uses at most 2 log log d + O((1/δ²) log(1/ε)) queries and estimates the number of defective elements d up to a multiplicative factor of 1 ± δ, with error probability ≤ ε. We also show an information-theoretic lower bound of (1 − ε) log log d − 1 on the number of queries any adaptive algorithm must make to estimate the number of defective elements for constant δ.
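The query model is easy to simulate. The sketch below shows the "at least one defective" semantics and a crude doubling strategy for estimating d: a random subset that includes each element with probability p is positive with probability roughly 1 − e^(−pd), so sweeping p over powers of two until the positive rate drops below 1 − 1/e gives an estimate near 1/p. This is an illustration of the model only, not the paper's algorithm, which achieves the 2 log log d + O((1/δ²) log(1/ε)) query bound.

    import random

    def make_group_tester(defectives):
        """Oracle: a test of a subset is positive iff it contains a defective element."""
        defectives = set(defectives)
        return lambda subset: any(x in defectives for x in subset)

    def rough_estimate(items, query, trials=30, rng=random.Random(0)):
        """Crude doubling estimate of the number of defectives (illustrative only)."""
        p = 1.0
        while p > 1.0 / len(items):
            positives = sum(
                query([x for x in items if rng.random() < p]) for _ in range(trials)
            )
            if positives / trials < 1 - 1 / 2.718281828:   # below ~1 - 1/e => p is near 1/d
                return round(1.0 / p)
            p /= 2.0
        return round(1.0 / p)

    items = list(range(1000))
    query = make_group_tester(random.sample(items, 40))   # 40 defective elements
    print(rough_estimate(items, query))                    # estimate within a small factor of 40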

Collaboration


Dive into Ananda Theertha Suresh's collaborations.

Top Co-Authors

Alon Orlitsky

University of California

Jayadev Acharya

Massachusetts Institute of Technology

Hirakendu Das

University of California
