Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ankan Saha is active.

Publication


Featured research published by Ankan Saha.


Web Search and Data Mining | 2012

Learning evolving and emerging topics in social media: a dynamic NMF approach with temporal regularization

Ankan Saha; Vikas Sindhwani

As massive repositories of real-time human commentary, social media platforms have arguably evolved far beyond passive facilitation of online social interactions. Rapid analysis of information content in online social media streams (news articles, blogs, tweets, etc.) is the need of the hour, as it allows businesses and government bodies to understand public opinion about products and policies. In most of these settings, data points appear as a stream of high-dimensional feature vectors. Guided by real-world industrial deployment scenarios, we revisit the problem of online learning of topics from streaming social media content. On one hand, the topics need to be dynamically adapted to the statistics of incoming data points; on the other hand, early detection of rising new trends is important in many applications. We propose an online nonnegative matrix factorization framework that captures the evolution and emergence of themes in unstructured text under a novel form of temporal regularization. We develop scalable optimization algorithms for our framework, propose a new set of evaluation metrics, and report promising empirical results on traditional TDT tasks as well as streaming Twitter data. Our system is able to rapidly capture emerging themes, track existing topics over time while maintaining temporal consistency and continuity in user views, and can be explicitly configured to bound the amount of information being presented to the user.
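The temporal regularization idea lends itself to a compact illustration. Below is a minimal sketch, not the paper's exact algorithm: one mini-batch update for NMF with a Frobenius-norm penalty tying the current topic matrix to the previous time step's. The function name, the multiplicative-update solver, and the parameter `mu` are illustrative assumptions.

```python
import numpy as np

def temporal_nmf_step(X, W, H_prev, mu=0.1, n_iters=50, eps=1e-10):
    """One mini-batch update of temporally regularized NMF (illustrative sketch).

    Minimizes ||X - W H||_F^2 + mu * ||H - H_prev||_F^2 over W, H >= 0,
    so topics H adapt to new data while staying close to last step's topics.

    X      : (n_docs, n_terms)   incoming batch of nonnegative document vectors
    W      : (n_docs, n_topics)  initial document-topic weights (nonnegative)
    H_prev : (n_topics, n_terms) topic matrix from the previous time step
    """
    H = H_prev.copy()
    for _ in range(n_iters):
        # Standard Lee-Seung multiplicative update for W.
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Update for H with the temporal penalty folded in: the mu * H_prev term
        # in the numerator and mu * H in the denominator pull H toward H_prev.
        H *= (W.T @ X + mu * H_prev) / (W.T @ W @ H + mu * H + eps)
    return W, H

# Hypothetical usage: 5 topics tracked over a stream of 100-term documents.
rng = np.random.default_rng(0)
H = rng.random((5, 100))
for _ in range(3):                     # three incoming batches
    X = rng.random((20, 100))          # 20 new documents per batch
    W = rng.random((20, 5))
    W, H = temporal_nmf_step(X, W, H)
```

Larger `mu` trades responsiveness to emerging themes for temporal consistency of the user-facing topics, which is the tension the abstract describes.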


SIAM Journal on Optimization | 2013

On the Nonasymptotic Convergence of Cyclic Coordinate Descent Methods

Ankan Saha; Ambuj Tewari

Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in signal processing, statistics, and machine learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method, as well as its competitive performance on $\ell_1$-regularized smooth optimization problems. Surprisingly, very little is known about its nonasymptotic convergence behavior on these problems. Most existing results either just prove convergence or provide asymptotic rates. We fill this gap in the literature by proving $O(1/k)$ convergence rates (where $k$ is the iteration count) for two variants of cyclic coordinate descent under an isotonicity assumption. Our analysis proceeds by comparing the objective values attained by the two variants with each other, as well as with the gradient descent algorithm. We show that the iterates generated by the cyclic coordinate descent methods remain better than those of gradient descent uniformly over time.
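For a concrete instance of the method being analyzed, here is a minimal sketch of cyclic coordinate descent on an $\ell_1$-regularized least-squares (lasso) objective, one of the problem classes the abstract mentions. The soft-thresholding update is the textbook coordinate rule; it is not one of the paper's two specific variants.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(A, b, lam, n_epochs=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n, d = A.shape
    x = np.zeros(d)
    r = b.astype(float).copy()        # residual b - A x, maintained incrementally
    col_sq = (A ** 2).sum(axis=0)     # per-coordinate curvature A_j . A_j
    for _ in range(n_epochs):         # k in the O(1/k) rate counts these sweeps
        for j in range(d):            # visit coordinates in a fixed cyclic order
            if col_sq[j] == 0.0:
                continue
            # Correlation of column j with the residual, with x_j "added back".
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)   # O(n) residual update
            x[j] = x_new
    return x

# Hypothetical usage on a small synthetic problem with a sparse ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = A @ np.array([1.5, -2.0] + [0.0] * 8) + 0.01 * rng.standard_normal(50)
print(cyclic_cd_lasso(A, b, lam=0.5)[:4])
```

Maintaining the residual incrementally keeps each coordinate step at O(n) cost, which is the practical appeal of the method that the abstract alludes to.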


Algorithmic Learning Theory | 2011

Accelerated training of max-margin Markov networks with kernels

Xinhua Zhang; Ankan Saha; S. V. N. Vishwanathan

Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (M3N) is an effective approach. All state-of-the-art algorithms for optimizing M3N objectives take at least $O(1/\epsilon)$ iterations to find an $\epsilon$-accurate solution. [1] broke this barrier by proposing an excessive gap reduction technique (EGR) which converges in $O(1/\sqrt{\epsilon})$ iterations. However, it is restricted to Euclidean projections, which consequently require an intractable amount of computation per iteration when applied to M3N. In this paper, we show that by extending EGR to Bregman projection, this faster rate of convergence can be retained and, more importantly, the updates can be performed efficiently by exploiting graphical model factorization. Further, we design a kernelized procedure which allows all computations per iteration to be performed at the same cost as the state-of-the-art approaches.
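As a back-of-the-envelope check on the two rates quoted above (a sketch of the conversion arithmetic only; $C$ and $C'$ are hypothetical problem-dependent constants, not the paper's bounds): a duality gap decaying at an $O(1/k^2)$ rate yields the $O(1/\sqrt{\epsilon})$ iteration count, whereas an $O(1/k)$ gap bound only yields $O(1/\epsilon)$.

```latex
% Rate-to-iteration-count conversion (sketch; C, C' are hypothetical constants).
% A gap bound of C/(k+1)^2 reaches accuracy epsilon once
%   C/(k+1)^2 <= epsilon,  i.e.  k >= sqrt(C/epsilon) - 1 = O(1/sqrt(epsilon)),
% whereas a C'/k bound needs
%   C'/k <= epsilon,       i.e.  k >= C'/epsilon         = O(1/epsilon).
\[
\frac{C}{(k+1)^2} \le \epsilon \;\Longleftrightarrow\; k \ge \sqrt{C/\epsilon} - 1,
\qquad
\frac{C'}{k} \le \epsilon \;\Longleftrightarrow\; k \ge \frac{C'}{\epsilon}.
\]
```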


arXiv: Learning | 2010

On the Finite Time Convergence of Cyclic Coordinate Descent Methods

Ankan Saha; Ambuj Tewari



International Conference on Artificial Intelligence and Statistics | 2011

Improved Regret Guarantees for Online Smooth Convex Optimization with Bandit Feedback

Ankan Saha; Ambuj Tewari



Symposium on Discrete Algorithms | 2011

New approximation algorithms for minimum enclosing convex shapes

Ankan Saha; S. V. N. Vishwanathan; Xinhua Zhang



Neural Information Processing Systems | 2010

Lower Bounds on Rate of Convergence of Cutting Plane Methods

Xinhua Zhang; Ankan Saha; S. V. N. Vishwanathan



Journal of Machine Learning Research | 2012

Smoothing multivariate performance measures

Xinhua Zhang; Ankan Saha; S. V. N. Vishwanathan



Archive | 2011

Dynamic NMFs with Temporal Regularization for Online Analysis of Streaming Text

Ankan Saha; Vikas Sindhwani



arXiv: Learning | 2010

Regularized Risk Minimization by Nesterov's Accelerated Gradient Methods: Algorithmic Extensions and Empirical Studies

Xinhua Zhang; Ankan Saha; S. V. N. Vishwanathan

Collaboration


Dive into Ankan Saha's collaborations.
