Ashkan Jafarpour
University of California, San Diego
Publications
Featured research published by Ashkan Jafarpour.
international symposium on information theory | 2014
Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh
We consider the problems of sorting and maximum-selection of n elements using adversarial comparators. We derive a maximum-selection algorithm that uses 8n comparisons in expectation, and a sorting algorithm that uses 4n log₂ n comparisons in expectation. Both are tight up to a constant factor. Our adversarial-comparator model was motivated by the practically important problem of density estimation, where we observe samples from an unknown distribution and try to determine which of n known distributions is closest to it. Existing algorithms run in Ω(n²) time. Applying the adversarial-comparator results, we derive a density-estimation algorithm that runs in only O(n) time.
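A minimal sketch of the adversarial-comparator setting (the `gap` parameter and function names below are illustrative choices, not from the paper): the comparator answers reliably only when its two inputs are well separated, and a naive sequential knockout under such answers can return an element far from the true maximum, which is what motivates the more careful algorithms in the paper.

```python
import random

def adversarial_comparator(x, y, gap=1.0):
    """Return the larger input, except that when the inputs are within
    `gap` of each other the answer may be arbitrary (random here)."""
    if abs(x - y) < gap:
        return random.choice((x, y))
    return max(x, y)

def knockout_max(items, compare=adversarial_comparator):
    """Naive sequential knockout using n - 1 comparisons.  Under
    adversarial answers the winner may drift below the true maximum;
    this sketch illustrates the comparator interface only, not the
    paper's 8n-expected-comparison algorithm."""
    winner = items[0]
    for item in items[1:]:
        winner = compare(winner, item)
    return winner

values = [random.uniform(0, 100) for _ in range(50)]
print(knockout_max(values), max(values))
```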
international symposium on information theory | 2014
Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh
Outlier detection is the problem of finding a few different distributions in a set of mostly identical ones. Closeness testing is the problem of deciding whether two distributions are identical or different. We relate the two problems, construct a sub-linear generalized closeness test for unequal sample lengths, and use this result to derive a sub-linear universal outlier detector. We also lower bound the sample complexity of both problems.
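For illustration only, a naive plug-in version of the closeness-testing decision problem, assuming discrete samples (this thresholded empirical test is not the paper's sub-linear test and requires many more samples; the threshold value is arbitrary):

```python
from collections import Counter

def empirical_tv_distance(sample_p, sample_q):
    """Total-variation distance between the two empirical distributions."""
    cp, cq = Counter(sample_p), Counter(sample_q)
    support = set(cp) | set(cq)
    return 0.5 * sum(abs(cp[x] / len(sample_p) - cq[x] / len(sample_q))
                     for x in support)

def naive_closeness_test(sample_p, sample_q, threshold=0.25):
    """Declare the sources 'different' if the plug-in distance is large.
    Illustrates the decision problem only; the paper's test is sub-linear."""
    distance = empirical_tv_distance(sample_p, sample_q)
    return "different" if distance > threshold else "same"
```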
international symposium on information theory | 2013
Jayadev Acharya; Hirakendu Das; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh
Over the past decade, several papers, e.g., [1-7] and references therein, have considered universal compression of sources over large alphabets, often using patterns to avoid infinite redundancy. Improving on previous results, we prove tight bounds on expected- and worst-case pattern redundancy, in particular closing a decade-long gap and showing that the worst-case pattern redundancy of i.i.d. distributions is Θ(n^{1/3}).
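Patterns abstract away symbol identities by replacing each symbol with the order of its first appearance; a minimal sketch of that transformation (function name is ours):

```python
def pattern(sequence):
    """Replace each symbol by the order of its first appearance,
    so symbol identities are discarded but the repetition structure is kept."""
    first_seen = {}
    out = []
    for symbol in sequence:
        if symbol not in first_seen:
            first_seen[symbol] = len(first_seen) + 1
        out.append(first_seen[symbol])
    return tuple(out)

print(pattern("abracadabra"))  # (1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1)
```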
international symposium on information theory | 2014
Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh
We consider universal compression of n samples drawn independently according to a monotone or m-modal distribution over k elements. We show that for all these distributions, the per-sample redundancy diminishes to 0 if k = exp(o(n/log n)) and is at least a constant if k = exp(Ω(n)).
allerton conference on communication, control, and computing | 2011
Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky
The problem of finding the optimal querying policy, in terms of expected query complexity, for symmetric Boolean threshold functions was solved in [1] in the context of collocated networks. In this paper, instead of considering the optimal policy for computing the functions, we define the problem of verifying the function value. We use this idea to give a simpler proof of the optimal querying policy for threshold functions. The method is more general and extends to delta functions and some other symmetric functions. We also provide partial results for interval functions and finally address a question posed in [1]. We have recently extended these results to arbitrary symmetric functions of Boolean inputs, which we mention at the end.
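A minimal sketch of early-stopping query evaluation for a threshold-t function of Boolean inputs (the fixed left-to-right order is only illustrative; the optimal policies studied here and in [1] also choose which variable to query based on the variables' probabilities):

```python
def compute_threshold(bits, t):
    """Query bits one at a time and stop as soon as the value of the
    threshold-t function (1 iff at least t of the bits are 1) is forced.
    Returns (value, number_of_queries)."""
    ones = 0
    for queries, b in enumerate(bits, start=1):
        ones += b
        remaining = len(bits) - queries
        if ones >= t:                 # threshold already reached
            return 1, queries
        if ones + remaining < t:      # threshold can no longer be reached
            return 0, queries
    return 0, len(bits)

print(compute_threshold([0, 1, 1, 0, 1], t=2))  # (1, 3)
```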
international symposium on information theory | 2014
Jayadev Acharya; Ashkan Jafarpour; Alon Orlitsky; Ananda Theertha Suresh
Poisson sampling is a method for eliminating dependence among symbols in a random sequence. It helps improve algorithm design, strengthen bounds, and simplify proofs. We relate the redundancy of fixed-length and Poisson-sampled sequences, use this result to derive a simple formula for the redundancy of general envelope classes, and apply this formula to obtain simple and tight bounds on the redundancy of power-law and exponential envelope classes, in particular answering a question posed in [1] about power-law envelopes.
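A minimal illustration of Poisson sampling, assuming the distribution is given explicitly as a probability vector (NumPy and the function name are our choices):

```python
import numpy as np

def poisson_sample_counts(probabilities, n, rng=None):
    """Draw N ~ Poisson(n) i.i.d. samples and return per-symbol counts.
    Under this sampling the counts of distinct symbols are independent
    Poisson(n * p_x) random variables, which is the independence
    property that Poisson sampling provides."""
    rng = rng or np.random.default_rng()
    N = rng.poisson(n)
    samples = rng.choice(len(probabilities), size=N, p=probabilities)
    return np.bincount(samples, minlength=len(probabilities))

counts = poisson_sample_counts([0.5, 0.3, 0.2], n=1000)
```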
international symposium on information theory | 2012
Hirakendu Das; Ashkan Jafarpour; Alon Orlitsky; Shengjun Pan; Ananda Theertha Suresh
In the query model of multi-variate function computation, the values of the variables are queried sequentially, in an order that may depend on previously revealed values, until the function's value can be determined. The function's computation query complexity is the lowest expected number of queries required by any query order. Instead of computation, it is often easier to consider verification, where the value of the function is given and the queries aim to verify it. The lowest expected number of queries necessary is the function's verification query complexity. We show that for all symmetric functions of independent binary random variables, the computation and verification complexities coincide. This provides a simple method for finding the query complexity and the optimal query order for computing many functions. We also show that if the symmetry condition is removed, there are functions whose verification complexity is strictly lower than their computation complexity, and mention that the same holds when the independence or binary conditions are removed.
international symposium on information theory | 2015
Moein Falahatgar; Ashkan Jafarpour; Alon Orlitsky; Venkatadheeraj Pichapati; Ananda Theertha Suresh
English words and the outputs of many other natural processes are well-known to follow a Zipf distribution. Yet this thoroughly-established property has never been shown to help compress or predict these important processes. We show that the expected redundancy of Zipf distributions of order α > 1 is roughly the 1/α power of the expected redundancy of unrestricted distributions. Hence for these orders, Zipf distributions can be better compressed and predicted than was previously known. Unlike the expected case, we show that worst-case redundancy is roughly the same for Zipf and for unrestricted distributions. Hence Zipf distributions have significantly different worst-case and expected redundancies, making them the first natural distribution class shown to have such a difference.
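For reference, a Zipf distribution of order α over k elements assigns the i-th element probability proportional to i^(-α); a minimal sketch (function name is ours):

```python
import numpy as np

def zipf_pmf(k, alpha):
    """Zipf distribution of order `alpha` over k elements:
    p_i proportional to i**(-alpha), normalized to sum to 1."""
    weights = np.arange(1, k + 1, dtype=float) ** -alpha
    return weights / weights.sum()

p = zipf_pmf(10_000, alpha=1.5)  # alpha > 1 is the regime treated in the abstract
```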
conference on learning theory | 2011
Jayadev Acharya; Hirakendu Das; Ashkan Jafarpour; Alon Orlitsky; Shengjun Pan
neural information processing systems | 2014
Ananda Theertha Suresh; Alon Orlitsky; Jayadev Acharya; Ashkan Jafarpour