Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Varun Kanade is active.

Publication


Featured research published by Varun Kanade.


international parallel and distributed processing symposium | 2007

SWARM: A Parallel Programming Framework for Multicore Processors

David A. Bader; Varun Kanade; Kamesh Madduri

Due to fundamental physical limitations and power constraints, we are witnessing a radical change in commodity microprocessor architectures to multicore designs. Continued performance on multicore processors now requires the exploitation of concurrency at the algorithmic level. In this paper, we identify key issues in algorithm design for multicore processors and propose a computational model for these systems. We introduce SWARM (software and algorithms for running on multi-core), a portable open-source parallel library of basic primitives that fully exploit multicore processors. Using this framework, we have implemented efficient parallel algorithms for important primitive operations such as prefix-sums, pointer-jumping, symmetry breaking, and list ranking; for combinatorial problems such as sorting and selection; for parallel graph theoretic algorithms such as spanning tree, minimum spanning tree, graph decomposition, and tree contraction; and for computational genomics applications such as maximum parsimony. The main contributions of this paper are the design of the SWARM multicore framework, the presentation of a multicore algorithmic model, and validation results for this model. SWARM is freely available as open-source from http://multicore-swarm.sourceforge.net/.
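SWARM itself is a C library; as a language-agnostic sketch of one of the primitives the abstract lists, here is a block-wise two-pass parallel prefix sum. The function name, block count, and use of Python threads are illustrative assumptions, not SWARM's API:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import accumulate

def parallel_prefix_sum(data, num_blocks=4):
    """Block-wise two-pass inclusive prefix sum, the classic shared-
    memory strategy behind prefix-sums primitives like SWARM's.
    Pass 1: each block is scanned locally in parallel.
    Pass 2: an exclusive scan of the block totals gives each block
    its offset, which is added back in parallel."""
    if not data:
        return []
    size = -(-len(data) // num_blocks)  # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]

    with ThreadPoolExecutor() as pool:
        local = list(pool.map(lambda b: list(accumulate(b)), blocks))

    offsets, running = [], 0
    for block in local:                 # exclusive scan of block totals
        offsets.append(running)
        running += block[-1]

    with ThreadPoolExecutor() as pool:
        shifted = pool.map(lambda pair: [x + pair[1] for x in pair[0]],
                           zip(local, offsets))
    return [x for block in shifted for x in block]
```

The same two-pass structure underlies most shared-memory scan implementations: local work scales across cores, and only the small scan of block totals is sequential.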


conference on innovations in theoretical computer science | 2012

Learning hurdles for sleeping experts

Varun Kanade; Thomas Steinke

We study the online decision problem where the set of available actions varies over time, also known as the sleeping experts problem. We consider the setting where performance is compared against the best ordering of actions in hindsight. In this paper, both the payoff function and the availability of actions are adversarial. Kleinberg et al. (2008) gave a computationally efficient no-regret algorithm in the setting where payoffs are stochastic. Kanade et al. (2009) gave an efficient no-regret algorithm in the setting where action availability is stochastic. However, whether there exists a computationally efficient no-regret algorithm in the fully adversarial setting was posed as an open problem by Kleinberg et al. (2008). We show that such an algorithm would imply an algorithm for PAC learning DNF, a long-standing open problem. We also show that the gambling problem, posed as an open problem by Abernethy (2010), is related to agnostically learning halfspaces, albeit under restricted distributions.
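As an illustration of the setting only (the paper is a hardness result, not an algorithm), here is a minimal sketch of the sleeping-experts protocol with a multiplicative-weights learner restricted to the awake actions. All names, parameters, and the update rule are assumptions for exposition:

```python
import random

def sleeping_experts(rounds, n_actions, payoff, available, eta=0.1):
    """Minimal sleeping-experts interaction loop.  Each round an
    adversary wakes a subset of actions; the learner runs
    multiplicative weights restricted to the awake actions.
    Illustrative of the setting only -- the paper shows evidence
    that no efficient algorithm is no-regret against the best
    ordering when payoffs and availability are both adversarial."""
    weights = [1.0] * n_actions
    total_payoff = 0.0
    for t in range(rounds):
        awake = available(t)                      # actions awake this round
        z = sum(weights[i] for i in awake)
        probs = [weights[i] / z for i in awake]
        action = random.choices(awake, probs)[0]  # sample an awake action
        total_payoff += payoff(t, action)
        for i in awake:                           # update awake actions only
            weights[i] *= (1.0 + eta) ** payoff(t, i)
    return total_payoff
```

Note that the benchmark in the paper is the best *ordering* of actions in hindsight, which is strictly harder than the per-round comparator this simple learner targets.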


Journal of Computer and System Sciences | 2012

Reliable agnostic learning

Adam Tauman Kalai; Varun Kanade; Yishay Mansour

It is well known that in many applications erroneous predictions of one type or another must be avoided. In some applications, like spam detection, false positive errors are serious problems. In other applications, like medical diagnosis, abstaining from making a prediction may be more desirable than making an incorrect prediction. In this paper we consider different types of reliable classifiers suited for such situations. We formalize the notion and study properties of reliable classifiers in the spirit of agnostic learning (Haussler, 1992; Kearns, Schapire, and Sellie, 1994), a PAC-like model where no assumption is made on the function being learned. We then give two algorithms for reliable agnostic learning under natural distributions. The first reliably learns DNFs with no false positives using membership queries. The second reliably learns halfspaces from random examples with no false positives or false negatives, but the classifier sometimes abstains from making predictions.
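A minimal sketch of the abstention idea, assuming a real-valued confidence score such as a halfspace margin; the function names and threshold scheme are illustrative, not the paper's construction:

```python
def reliable_predict(score, tau=0.5):
    """Abstaining ('reliable') classifier in the spirit of the paper:
    predict +1 or -1 only when the confidence score clears the
    threshold tau; otherwise abstain (return 0) rather than risk an
    error.  `score` could be, e.g., a halfspace margin <w, x>."""
    if score > tau:
        return +1
    if score < -tau:
        return -1
    return 0  # abstain

def calibrate_tau(scores_of_negatives):
    """Toy calibration for a no-false-positive regime: raise tau to
    the highest score seen on negative examples, so that no negative
    would have been predicted positive."""
    return max(scores_of_negatives, default=0.0)
```

The trade-off the paper formalizes is visible even in this toy: a larger tau eliminates one error type at the cost of more abstentions.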


foundations of computer science | 2011

Evolution with Recombination

Varun Kanade

Valiant (2007) introduced a computational model of evolution and suggested that Darwinian evolution be studied in the framework of computational learning theory. Valiant describes evolution as a restricted form of learning where exploration is limited to a set of possible mutations and feedback is received through the survival of the fittest mutation. In subsequent work, Feldman (2008) showed that evolvability in Valiant's model is equivalent to learning in the correlational statistical query (CSQ) model. We extend Valiant's model to include genetic recombination and show that in certain cases recombination can significantly speed up the process of evolution in terms of the number of generations, though at the expense of population size. This follows via a reduction from parallel CSQ algorithms to evolution with recombination. It gives an exponential speed-up (in terms of the number of generations) over previously known results for evolving conjunctions and halfspaces with respect to restricted distributions.
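As a toy illustration of the Valiant-style loop the abstract refers to, mutation plus survival of the fittest and no recombination, one can evolve a small conjunction by greedy search on empirical fitness. Everything here (parameters, neighborhood, fitness estimator) is a hypothetical sketch, not the paper's construction:

```python
import random

def evolve_conjunction(target_vars, n, generations=100, sample=400, seed=1):
    """Toy sketch of Valiant-style evolution without recombination:
    the hypothesis is a set of variables whose AND should match a
    hidden target conjunction over {0,1}^n.  Each generation tries
    all single-variable add/remove mutations and keeps the one with
    the best empirical fitness (agreement on random examples)."""
    rng = random.Random(seed)
    target = set(target_vars)
    hyp = set()

    def fitness(h):
        hits = 0
        for _ in range(sample):
            x = [rng.random() < 0.5 for _ in range(n)]
            hits += (all(x[i] for i in target) == all(x[i] for i in h))
        return hits / sample

    for _ in range(generations):
        best, best_fit = hyp, fitness(hyp)
        for i in range(n):              # mutation neighborhood
            cand = hyp ^ {i}            # toggle variable i in/out
            f = fitness(cand)
            if f > best_fit:            # survival of the fittest mutation
                best, best_fit = cand, f
        hyp = best
    return hyp
```

The paper's speed-up comes from recombination letting a population explore many mutations per generation, which this sequential loop deliberately lacks.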


IEEE Transactions on Information Theory | 2016

Global and Local Information in Clustering Labeled Block Models

Varun Kanade; Elchanan Mossel; Tselil Schramm

The stochastic block model is a classical cluster-exhibiting random graph model that has been widely studied in statistics, physics, and computer science. In its simplest form, the model is a random graph with two equal-sized clusters, with intracluster edge probability p and intercluster edge probability q. We focus on the sparse case, i.e., p, q = O(1/n).
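The two-cluster model with intracluster edge probability p and intercluster edge probability q is easy to sample; a minimal sketch (function name and parameters are illustrative, not from the paper):

```python
import random

def sample_sbm(n, p, q, seed=0):
    """Sample a two-equal-cluster stochastic block model on n nodes:
    nodes 0..n/2-1 form one cluster, the rest the other.  Each pair
    is joined independently with probability p within a cluster and
    q across clusters; the paper's sparse regime has p, q = O(1/n)."""
    rng = random.Random(seed)
    half = n // 2
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            same_cluster = (u < half) == (v < half)
            if rng.random() < (p if same_cluster else q):
                edges.append((u, v))
    return edges
```

The clustering task is to recover the two hidden clusters from the edge list alone, which becomes information-theoretically delicate exactly in this sparse regime.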


algorithmic learning theory | 2015

Learning with a Drifting Target Concept

Steve Hanneke; Varun Kanade; Liu Yang



conference on innovations in theoretical computer science | 2014

Attribute-efficient evolvability of linear functions

Elaine Angelino; Varun Kanade



international workshop on approximation, randomization, and combinatorial optimization. algorithms and techniques | 2016

Stable Matching with Evolving Preferences.

Varun Kanade; Nikos Leonardos; Frédéric Magniez



principles of distributed computing | 2017

Brief Announcement: How Large is your Graph?

Varun Kanade; Frederik Mallmann-Trenn; Victor Verdugo



international conference on distributed computing | 2017

How Large Is Your Graph

Varun Kanade; Frederik Mallmann-Trenn; Victor Verdugo


Collaboration


Dive into Varun Kanade's collaborations.

Top Co-Authors

Matt J. Kusner

Washington University in St. Louis


Pranjal Awasthi

Carnegie Mellon University
