Ambuj Tewari
University of Michigan
Publication
Featured research published by Ambuj Tewari.
conference on learning theory | 2007
Ambuj Tewari; Peter L. Bartlett
Binary classification is a well studied special case of the classification problem. Statistical properties of binary classifiers, such as consistency, have been investigated in a variety of settings. Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that one can lose consistency in generalizing a binary classification method to deal with multiple classes. We study a rich family of multiclass methods and provide a necessary and sufficient condition for their consistency. We illustrate our approach by applying it to some multiclass methods proposed in the literature.
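The generalization from binary to multiclass methods that the abstract alludes to can be illustrated with the simplest such reduction, one-vs-all. This is a generic sketch, not code from the paper; `make_binary` is a hypothetical factory producing any binary learner that exposes `fit(X, y)` (with ±1 labels) and a real-valued `decision_function(X)`:

```python
import numpy as np

class OneVsAll:
    """Reduce K-class classification to K binary problems: class k
    versus the rest, predicting via the highest binary score."""

    def __init__(self, make_binary):
        self.make_binary = make_binary  # factory for a fresh binary learner

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = []
        for k in self.classes_:
            m = self.make_binary()
            # relabel: +1 for class k, -1 for everything else
            m.fit(X, np.where(y == k, 1, -1))
            self.models_.append(m)
        return self

    def predict(self, X):
        # stack the K real-valued scores and take the argmax per row
        scores = np.column_stack(
            [m.decision_function(X) for m in self.models_]
        )
        return self.classes_[np.argmax(scores, axis=1)]
```

Whether such a reduction is consistent depends on the surrogate loss the underlying binary learner minimizes, which is precisely the kind of question the paper's necessary and sufficient condition addresses.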
international conference on machine learning | 2009
Shai Shalev-Shwartz; Ambuj Tewari
We describe and analyze two stochastic methods for l1 regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration while the second method updates the entire weight vector but only uses a single training example at each iteration. In both methods, the choice of feature/example is uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of stochastic methods by experimenting with synthetic and natural data sets.
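A minimal sketch of the first kind of method, stochastic coordinate descent for the Lasso objective (1/2n)‖y − Xw‖² + λ‖w‖₁, updating a single uniformly random feature per iteration, might look as follows. This is an illustrative implementation of the general technique, not the paper's code; the closed-form soft-thresholding coordinate update is standard for this objective:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the prox of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_coordinate_descent(X, y, lam, n_iters=5000, seed=0):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1, touching one
    uniformly random coordinate per iteration."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                       # residual, maintained incrementally
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    for _ in range(n_iters):
        j = rng.integers(d)             # feature chosen uniformly at random
        if col_sq[j] == 0:
            continue
        # exact minimization over coordinate j, holding the rest fixed
        rho = X[:, j] @ r / n + col_sq[j] * w[j]
        w_new = soft_threshold(rho, lam) / col_sq[j]
        r += X[:, j] * (w[j] - w_new)   # cheap residual update
        w[j] = w_new
    return w
```

Each iteration costs O(n) regardless of the dimension d, which is the source of the runtime advantage over full-gradient deterministic methods when the problem is large.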
conference on information and knowledge management | 2011
Kai-Yang Chiang; Nagarajan Natarajan; Ambuj Tewari; Inderjit S. Dhillon
We consider the problem of link prediction in signed networks. Such networks arise on the web in a variety of ways when users can implicitly or explicitly tag their relationship with other users as positive or negative. The signed links thus created reflect social attitudes of the users towards each other in terms of friendship or trust. Our first contribution is to show how any quantitative measure of social imbalance in a network can be used to derive a link prediction algorithm. Our framework allows us to reinterpret some existing algorithms as well as derive new ones. Second, we extend the approach of Leskovec et al. (2010) by presenting a supervised machine learning based link prediction method that uses features derived from longer cycles in the network. The supervised method outperforms all previous approaches on 3 networks drawn from sources such as Epinions, Slashdot and Wikipedia. The supervised approach easily scales to these networks, the largest of which has 132k nodes and 841k edges. Most real-world networks have an overwhelmingly large proportion of positive edges and it is therefore easy to get a high overall accuracy at the cost of a high false positive rate. We see that our supervised method not only achieves good accuracy for sign prediction but is also especially effective in lowering the false positive rate.
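For concreteness, the shortest cycle-based features for a candidate edge (u, v) — the four signed length-2 paths, i.e. triads — can be computed as below. This is an illustrative sketch only: the `edges` representation (unordered pair → ±1) is an assumption for the example, not the paper's data format:

```python
from collections import defaultdict

def triad_features(edges, u, v):
    """Count the four signed length-2 paths u - w - v; these are the
    shortest-cycle (length-3) features for predicting the sign of (u, v).
    `edges` maps an unordered pair frozenset({a, b}) to +1 or -1."""
    nbrs = defaultdict(dict)
    for pair, s in edges.items():
        a, b = tuple(pair)
        nbrs[a][b] = s
        nbrs[b][a] = s
    counts = {(+1, +1): 0, (+1, -1): 0, (-1, +1): 0, (-1, -1): 0}
    for w in set(nbrs[u]) & set(nbrs[v]):
        if w in (u, v):   # skip degenerate self-paths
            continue
        counts[(nbrs[u][w], nbrs[w][v])] += 1
    return counts
```

In a supervised pipeline these counts (and their analogues from longer cycles) become the feature vector fed to a standard classifier that predicts the edge sign.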
Siam Journal on Optimization | 2013
Ankan Saha; Ambuj Tewari
Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in signal processing, statistics, and machine learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method, as well as its competitive performance on ℓ1-regularized smooth optimization problems. Surprisingly, very little is known about its nonasymptotic convergence behavior on these problems. Most existing results either just prove convergence or provide asymptotic rates. We fill this gap in the literature by proving O(1/k) convergence rates (where k is the iteration count) for two variants of cyclic coordinate descent under an isotonicity assumption. Our analysis proceeds by comparing the objective values attained by the two variants with each other, as well as with the gradient descent algorithm. We show that the iterates generated by the cyclic coordinate descent methods remain better than those of gradient descent uniformly over time.
conference on learning theory | 2007
Peter L. Bartlett; Ambuj Tewari
Annals of Behavioral Medicine | 2016
Inbal Nahum-Shani; Shawna N. Smith; Bonnie Spring; Linda M. Collins; Katie Witkiewitz; Ambuj Tewari; Susan A. Murphy
PLOS ONE | 2013
U. Martin Singh-Blom; Nagarajan Natarajan; Ambuj Tewari; John O. Woods; Inderjit S. Dhillon; Edward M. Marcotte
ieee international conference on high performance computing data and analytics | 2002
Ambuj Tewari; Utkarsh Srivastava; Phalguni Gupta
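The two algorithm families that such an analysis compares can be sketched as follows for the ℓ1-regularized least-squares (Lasso) objective; this is a generic illustration under that assumed objective, not the paper's code, and `cyclic_cd` / `ista` are illustrative names:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the prox of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def objective(X, y, w, lam):
    n = len(y)
    return 0.5 / n * np.sum((y - X @ w) ** 2) + lam * np.abs(w).sum()

def cyclic_cd(X, y, lam, sweeps=50):
    """Cyclic coordinate descent: each sweep exactly minimizes over
    every coordinate in a fixed order."""
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                  # residual, maintained incrementally
    csq = (X ** 2).sum(axis=0) / n
    for _ in range(sweeps):
        for j in range(d):
            if csq[j] == 0:
                continue
            rho = X[:, j] @ r / n + csq[j] * w[j]
            w_new = soft(rho, lam) / csq[j]
            r += X[:, j] * (w[j] - w_new)
            w[j] = w_new
    return w

def ista(X, y, lam, iters=50):
    """Proximal gradient descent baseline with constant step 1/L."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(iters):
        grad = -X.T @ (y - X @ w) / n
        w = soft(w - grad / L, lam / L)
    return w
```

One full sweep of `cyclic_cd` and one step of `ista` touch the data a comparable number of times, which makes the per-iterate objective comparison in the abstract a natural one.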