
Publications


Featured research published by Atri Rudra.


Cryptographic Hardware and Embedded Systems | 2001

Efficient Rijndael Encryption Implementation with Composite Field Arithmetic

Atri Rudra; Pradeep Dubey; Charanjit S. Jutla; Vijay Kumar; Josyula R. Rao; Pankaj Rohatgi

We explore the use of subfield arithmetic for efficient implementations of Galois Field arithmetic especially in the context of the Rijndael block cipher. Our technique involves mapping field elements to a composite field representation. We describe how to select a representation which minimizes the computation cost of the relevant arithmetic, taking into account the cost of the mapping as well. Our method results in a very compact and fast gate circuit for Rijndael encryption. In conjunction with bit-slicing techniques applied to newly proposed parallelizable modes of operation, our circuit leads to a high-performance software implementation for Rijndael encryption which offers significant speedup compared to previously reported implementations.
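
The composite-field idea can be sketched in a few lines: build GF(2^4) first, then represent each GF(2^8) element as a degree-1 polynomial over GF(2^4) and reduce products modulo a quadratic. The toy Python sketch below illustrates only the general technique; it is not the paper's optimized circuit, basis choice, or mapping, and all names (`gf16_mul`, `LAM`, `gf256_mul`) are ours.

```python
def gf16_mul(a, b):
    """Multiply in GF(2^4) = GF(2)[x]/(x^4 + x + 1), shift-and-add style."""
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        carry = a & 0x8
        a = (a << 1) & 0xF
        if carry:
            a ^= 0x3  # reduce: x^4 = x + 1
    return r

# Brute-force a constant LAM so that x^2 + x + LAM has no root in GF(16),
# making GF(16)[x]/(x^2 + x + LAM) a field isomorphic to GF(2^8).
LAM = next(l for l in range(1, 16)
           if all(gf16_mul(t, t) ^ t ^ l != 0 for t in range(16)))

def gf256_mul(p, q):
    """Multiply (p1*x + p0)(q1*x + q0) mod x^2 + x + LAM.
    Elements of the composite field are (hi, lo) pairs over GF(16)."""
    p1, p0 = p
    q1, q0 = q
    # (p1 q1) x^2 + (p1 q0 + p0 q1) x + p0 q0, with x^2 = x + LAM:
    hi = gf16_mul(p1, q0) ^ gf16_mul(p0, q1) ^ gf16_mul(p1, q1)
    lo = gf16_mul(p0, q0) ^ gf16_mul(LAM, gf16_mul(p1, q1))
    return hi, lo
```

The payoff the paper exploits is that GF(16) operations are cheap enough to tabulate or implement with tiny circuits, so one GF(2^8) multiply decomposes into a handful of 4-bit operations.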


Symposium on Discrete Algorithms | 2006

Ordering by weighted number of wins gives a good ranking for weighted tournaments

Don Coppersmith; Lisa Fleischer; Atri Rudra

We consider the following simple algorithm for the feedback arc set problem in weighted tournaments: order the vertices by their weighted indegrees. We show that this algorithm has an approximation guarantee of 5 if the weights satisfy probability constraints (for any pair of vertices u and v, w_uv + w_vu = 1). Special cases of the feedback arc set problem in such weighted tournaments include the feedback arc set problem in unweighted tournaments and rank aggregation. Finally, for any constant ε > 0, we exhibit an infinite family of (unweighted) tournaments for which the above algorithm (irrespective of how ties are broken) has an approximation ratio of 5 - ε.
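
The algorithm itself is a one-liner: sort vertices by weighted indegree, so the vertex with the most weighted wins (fewest weighted losses) comes first. A minimal Python sketch, with function and variable names of our own choosing:

```python
def rank_by_weighted_indegree(w):
    """Rank the vertices of a weighted tournament for feedback arc set.

    w[u][v] is the weight of arc u -> v, with w[u][v] + w[v][u] = 1
    (the 'probability constraints' from the abstract).  Vertices with
    smaller weighted indegree, i.e. more weighted wins, come first.
    """
    verts = list(w)
    indeg = {v: sum(w[u][v] for u in verts if u != v) for v in verts}
    return sorted(verts, key=lambda v: indeg[v])

# A transitive tournament: a beats b and c, b beats c.
w = {"a": {"b": 1, "c": 1}, "b": {"a": 0, "c": 1}, "c": {"a": 0, "b": 0}}
```

On this acyclic example the ordering `["a", "b", "c"]` has no feedback arcs at all; the paper's 5-approximation guarantee concerns the general cyclic case.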


IEEE Transactions on Information Theory | 2008

Explicit Codes Achieving List Decoding Capacity: Error-Correction With Optimal Redundancy

Venkatesan Guruswami; Atri Rudra

In this paper, we present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 - R - ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on ε) using existing ideas concerning "list recovery" and expander-based codes. Concatenating the folded RS codes with suitable inner codes, we get binary codes that can be efficiently decoded up to twice the radius achieved by the standard GMD decoding.
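
The "bundling" step is simple enough to state in code: an m-folded codeword groups m consecutive symbols into a single symbol over the larger alphabet F_q^m. A hypothetical sketch (the function name is ours; the actual folding parameter and the list-decoding algorithm are not shown):

```python
def fold(codeword, m):
    """Fold a length-n codeword (n divisible by m) into n/m bundles of m
    consecutive symbols; each bundle is one symbol of the folded code."""
    assert len(codeword) % m == 0, "block length must be divisible by m"
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]
```

Note that one corrupted folded symbol can corrupt up to m of the underlying RS symbols, so the capacity-achieving guarantee comes from the paper's new decoding analysis, not from plain RS decoding of the unfolded word.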


International Colloquium on Automata, Languages and Programming | 2009

Approximating Matches Made in Heaven

Ning Chen; Nicole Immorlica; Anna R. Karlin; Mohammad Mahdian; Atri Rudra

Motivated by applications in online dating and kidney exchange, we study a stochastic matching problem in which we have a random graph G given by a node set V and probabilities p(i,j) on all pairs i,j ∈ V representing the probability that edge (i,j) exists. Additionally, each node has an integer weight t(i) called its patience parameter. Nodes represent agents in a matching market with dichotomous preferences, i.e., each agent finds every other agent either acceptable or unacceptable and is indifferent between all acceptable agents. The goal is to maximize the welfare, or produce a matching between acceptable agents of maximum size. Preferences must be solicited based on probabilistic information represented by p(i,j), and agent i can be asked at most t(i) questions regarding his or her preferences. A stochastic matching algorithm iteratively probes pairs of nodes i and j with positive patience parameters. With probability p(i,j), an edge exists and the nodes are irrevocably matched. With probability 1 - p(i,j), the edge does not exist and the patience parameters of the nodes are decremented. We give a simple greedy strategy for selecting probes which produces a matching whose cardinality is, in expectation, at least a quarter of the size of the optimal algorithm's matching. We additionally show that variants of our algorithm (and our analysis) can handle more complicated constraints, such as a limit on the maximum number of rounds, or the number of pairs probed in each round.
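
The probing process is easy to simulate. The sketch below greedily probes the available pair with the largest existence probability; it is only an illustrative simulation under our own naming conventions, not the paper's analyzed algorithm or its proof of the 4-approximation guarantee:

```python
import random

def greedy_probe_matching(p, t, rng=random.Random(0)):
    """Simulate stochastic matching.

    p: dict mapping pairs (i, j) to existence probabilities.
    t: dict mapping nodes to integer patience parameters.
    Repeatedly probe the highest-probability pair whose endpoints are
    unmatched and still patient; on success match irrevocably, on
    failure decrement both patience counters."""
    patience = dict(t)
    matched = {}
    while True:
        avail = [(i, j) for (i, j) in p
                 if i not in matched and j not in matched
                 and patience[i] > 0 and patience[j] > 0]
        if not avail:
            return matched
        i, j = max(avail, key=lambda e: p[e])
        if rng.random() < p[(i, j)]:   # edge exists: match irrevocably
            matched[i], matched[j] = j, i
        else:                          # edge absent: both spend patience
            patience[i] -= 1
            patience[j] -= 1
```

The irrevocability on success and the patience decrement on failure are exactly the two rules from the abstract that make this harder than ordinary matching.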


International Conference on Management of Data | 2014

Skew strikes back: new developments in the theory of join algorithms

Hung Q. Ngo; Christopher Ré; Atri Rudra

Evaluating the relational join is one of the central algorithmic and most well-studied problems in database systems. A staggering number of variants have been considered, including block-nested loop join, hash join, Grace, and sort-merge (see Graefe [17] for a survey, and [4, 7, 24] for discussions of more modern issues). Commercial database engines use finely tuned join heuristics that take into account a wide variety of factors including the selectivity of various predicates, memory, IO, etc. This study of join queries notwithstanding, the textbook description of join processing is suboptimal. This survey describes recent results on join algorithms that have provably worst-case optimal runtime guarantees. We survey recent work and provide a simpler and unified description of these algorithms that we hope is useful for theory-minded readers, algorithm designers, and systems implementors. Much of this progress can be understood by thinking about a simple join evaluation problem that we illustrate with the so-called triangle query, a query that has become increasingly popular in the last decade with the advent of social networks, biological motifs, and graph databases [36, 37].
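
The triangle query makes a good minimal example of the attribute-at-a-time style of evaluation these algorithms use: bind one attribute at a time by intersecting candidate sets, instead of materializing a possibly huge pairwise join first. A simplified Python sketch with relations as sets of pairs (this illustrates the style only, not the optimized algorithms from the survey):

```python
def triangle_join(R, S, T):
    """Evaluate Q(a, b, c) <- R(a, b), S(b, c), T(a, c).

    Attribute-at-a-time: pick a appearing in both R and T, then b
    consistent with a, then c consistent with both b and a."""
    out = []
    for a in sorted({a for a, _ in R} & {a for a, _ in T}):
        Ra = {b for a2, b in R if a2 == a}   # b-values paired with a in R
        Ta = {c for a2, c in T if a2 == a}   # c-values paired with a in T
        for b in sorted(Ra & {b for b, _ in S}):
            for c in sorted({c for b2, c in S if b2 == b} & Ta):
                out.append((a, b, c))
    return out

# R(a,b), S(b,c), T(a,c) as sets of pairs
R = {(1, 2), (1, 3)}
S = {(2, 3)}
T = {(1, 3)}
```

Here only the triple (1, 2, 3) satisfies all three relations; the pair (1, 3) in R is pruned as soon as no matching S tuple exists, which is the intuition behind the worst-case optimality results.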


Theoretical Computer Science | 2004

Online learning in online auctions

Avrim Blum; Vijay Kumar; Atri Rudra; Felix F. Wu

We consider the problem of revenue maximization in online auctions, that is, auctions in which bids are received and dealt with one-by-one. In this paper, we demonstrate that results from online learning can be usefully applied in this context, and we derive a new auction for digital goods that achieves a constant competitive ratio with respect to the optimal (offline) fixed price revenue. This substantially improves upon the best previously known competitive ratio for this problem of O(exp(√log log h)). We also apply our techniques to the related problem of designing online posted price mechanisms, in which the seller declares a price for each of a series of buyers, and each buyer either accepts or rejects the good at that price. Despite the relative lack of information in this setting, we show that online learning techniques can be used to obtain results for online posted price mechanisms which are similar to those obtained for online auctions.
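
To make the "online learning" connection concrete, here is a toy multiplicative-weights sketch over a grid of candidate posted prices. It assumes full feedback (each buyer's value is revealed after the round), which is stronger than the limited feedback of the posted-price setting in the paper; the function name and parameters are ours:

```python
import math
import random

def posted_prices_mw(prices, values, eta=0.5, rng=random.Random(0)):
    """Multiplicative weights over candidate posted prices: sample a price
    in proportion to its weight, post it to the buyer, then reward every
    candidate price by the (normalized) revenue it would have earned."""
    w = [1.0] * len(prices)
    top = max(prices)
    revenue = 0.0
    for v in values:
        # sample a price index proportionally to the weights
        r = rng.random() * sum(w)
        i = 0
        while r > w[i]:
            r -= w[i]
            i += 1
        if v >= prices[i]:            # buyer accepts at the posted price
            revenue += prices[i]
        for j, price in enumerate(prices):   # full-information update
            gain = price / top if v >= price else 0.0
            w[j] *= math.exp(eta * gain)
    return revenue
```

The regret guarantees of such weight-update schemes against the best fixed price in hindsight are what the paper adapts to obtain its constant competitive ratio.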


Symposium on the Theory of Computing | 2007

Lower bounds for randomized read/write stream algorithms

Paul Beame; T. S. Jayram; Atri Rudra

Motivated by the capabilities of modern storage architectures, we consider the following generalization of the data stream model, where the algorithm has sequential access to multiple streams. Unlike the data stream model, where the stream is read only, in this new model (introduced in [8,9]) the algorithms can also write onto streams. There is no limit on the size of the streams, but the number of passes made on the streams is restricted. On the other hand, the amount of internal memory used by the algorithm is scarce, similar to the data stream model. We resolve the main open problem in [7] of proving lower bounds in this model for algorithms that are allowed to have 2-sided error. Previously, such lower bounds were shown only for deterministic and 1-sided error randomized algorithms [9,7]. We consider the classical set disjointness problem, which has proved to be invaluable for deriving lower bounds for many other problems involving data streams and other randomized models of computation. For this problem, we show a near-linear lower bound on the size of the internal memory used by a randomized algorithm with 2-sided error that is allowed to have o(log N/log log N) passes over the streams. This bound is almost optimal, since there is a simple algorithm that can solve this problem using logarithmic memory if more passes over the streams are allowed. Applications include near-linear lower bounds on the internal memory for well-known problems in the literature: (1) approximately counting the number of distinct elements in the input (F_0); (2) approximating the frequency of the mode of an input sequence (F*_∞); (3) computing the join of two relations; and (4) deciding if some node of an XML document matches an XQuery (or XPath) query. Our techniques involve a novel direct-sum type of argument that yields lower bounds for many other problems. Our results asymptotically improve previously known bounds for any problem even in deterministic and 1-sided error models of computation.


IEEE Transactions on Information Theory | 2006

Limits to List Decoding Reed–Solomon Codes

Venkatesan Guruswami; Atri Rudra

In this paper, we prove the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes. 1) Given n distinct elements α_1, ..., α_n from a field F, and n subsets S_1, ..., S_n of F, each of size at most l, the list decoding algorithm of Guruswami and Sudan can in polynomial time output all polynomials p of degree at most k that satisfy p(α_i) ∈ S_i for every i, as long as l [...] 0 (agreement of k is trivial to achieve). Such a bound was known earlier only for a nonexplicit center. Finding explicit bad list decoding configurations is of significant interest; for example, the best known rate versus distance tradeoff, due to Xing, is based on a bad list decoding configuration for algebraic-geometric codes, which is unfortunately not explicitly known.


International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques | 2005

Tolerant locally testable codes

Venkatesan Guruswami; Atri Rudra

An error-correcting code is said to be locally testable if it has an efficient spot-checking procedure that can distinguish codewords from strings that are far from every codeword, looking at very few locations of the input in doing so. Locally testable codes (LTCs) have generated a lot of interest over the years, in large part due to their connection to probabilistically checkable proofs (PCPs). The ability to correct errors that occur during transmission is one of the big advantages of using a code. Hence, from a coding-theoretic angle, local testing is potentially more useful if, in addition to accepting codewords, it also accepts strings that are close to a codeword (in contrast, local testers can have arbitrary behavior on such strings, which potentially annuls the benefits of error-correction). This would imply that when the tester accepts, one can follow up the testing with a (more expensive) decoding procedure to correct the errors and recover the transmitted codeword, while if the tester rejects, we can save the effort of running the more expensive decoding algorithm. In this work, we define such testers, which we call tolerant testers, following some recent work in property testing [13]. We revisit some recent constructions of LTCs and show how one can make them locally testable in a tolerant sense. While we do not optimize the parameters, the main message from our work is that there are explicit tolerant LTCs with similar parameters to LTCs.


Journal of Exposure Science and Environmental Epidemiology | 2016

Using smartphones to collect time-activity data for long-term personal-level air pollution exposure assessment.

Mark L. Glasgow; Carole B. Rudra; Eun Hye Yoo; Murat Demirbas; Joel Merriman; Pramod Nayak; Christina R Crabtree-Ide; Adam A. Szpiro; Atri Rudra; Jean Wactawski-Wende; Lina Mu

Because of the spatiotemporal variability of people and air pollutants within cities, it is important to account for a person's movements over time when estimating personal air pollution exposure. This study aimed to examine the feasibility of using smartphones to collect personal-level time–activity data. Using Skyhook Wireless's hybrid geolocation module, we developed "Apolux" (Air, Pollution, Exposure), an Android™ smartphone application designed to track participants' location in 5-min intervals for 3 months. From 42 participants, we compared Apolux data with contemporaneous data from two self-reported, 24-h time–activity diaries. About three-fourths of measurements were collected within 5 min of each other (mean=74.14%), and 79% of participants reporting constantly powered-on smartphones (n=38) had a daily average data collection frequency of <10 min. Apolux's degree of temporal resolution varied across manufacturers, mobile networks, and the time of day that data collection occurred. The discrepancy between diary points and corresponding Apolux data was 342.3 m (Euclidean distance) and varied across mobile networks. This study's high compliance and feasibility for data collection demonstrates the potential for integrating smartphone-based time–activity data into long-term and large-scale air pollution exposure studies.

Collaboration


Dive into Atri Rudra's collaborations.

Top Co-Authors

Anupam Gupta

Carnegie Mellon University


Nikhil Bansal

Eindhoven University of Technology
