Publications


Featured research published by Rishabh K. Iyer.


International Joint Conference on Natural Language Processing | 2015

Summarization of Multi-Document Topic Hierarchies using Submodular Mixtures

Ramakrishna Bairi; Rishabh K. Iyer; Ganesh Ramakrishnan; Jeff A. Bilmes

We study the problem of summarizing DAG-structured topic hierarchies over a given set of documents. Example applications include automatically generating Wikipedia disambiguation pages for a set of articles, and generating candidate multi-labels for preparing machine learning datasets (e.g., for text classification, functional genomics, and image classification). Unlike previous work, which focuses on clustering the set of documents using the topic hierarchy as features, we directly pose the problem as a submodular optimization problem on a topic hierarchy using the documents as features. Desirable properties of the chosen topics include document coverage, specificity, topic diversity, and topic homogeneity, each of which, we show, is naturally modeled by a submodular function. Other information, provided say by unsupervised approaches such as LDA and its variants, can also be utilized by defining a submodular function that expresses coherence between the chosen topics and this information. We use a large-margin framework to learn convex mixtures over the set of submodular components. We empirically evaluate our method on the problem of automatically generating Wikipedia disambiguation pages using human generated clusterings as ground truth. We find that our framework improves upon several baselines according to a variety of standard evaluation metrics including the Jaccard Index, F1 score and NMI, and moreover, can be scaled to extremely large scale problems.
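The abstract above describes greedily optimizing a weighted mixture of submodular components (coverage, diversity, and so on) to pick topics. The following is a minimal illustrative sketch of that general idea, not the paper's code: the topic names, documents, weights, and the two toy components (set coverage and a concave-over-counts diversity term) are invented for the example.

```python
# Sketch: greedily maximize a weighted mixture of two submodular
# components under a cardinality constraint. For monotone submodular
# objectives, greedy selection carries the classic (1 - 1/e) guarantee.
import math

def coverage(selected, topic_docs):
    """Document coverage: size of the union of covered documents."""
    covered = set()
    for t in selected:
        covered |= topic_docs[t]
    return len(covered)

def diversity(selected, topic_docs):
    """Concave (sqrt) over per-document counts: rewards spreading
    coverage across documents rather than piling onto the same ones."""
    counts = {}
    for t in selected:
        for d in topic_docs[t]:
            counts[d] = counts.get(d, 0) + 1
    return sum(math.sqrt(c) for c in counts.values())

def greedy_mixture(topic_docs, weights, k):
    """Pick up to k topics maximizing w0*coverage + w1*diversity."""
    selected = []
    def f(s):
        return (weights[0] * coverage(s, topic_docs)
                + weights[1] * diversity(s, topic_docs))
    for _ in range(k):
        best, best_gain = None, 0.0
        for t in topic_docs:
            if t in selected:
                continue
            gain = f(selected + [t]) - f(selected)  # marginal gain
            if gain > best_gain:
                best, best_gain = t, gain
        if best is None:
            break
        selected.append(best)
    return selected

# Toy disambiguation-page example: each "topic" covers some documents.
topics = {
    "Mercury (planet)": {1, 2, 3},
    "Mercury (element)": {4, 5},
    "Mercury (mythology)": {3, 6},
}
print(greedy_mixture(topics, [1.0, 0.5], 2))
# → ['Mercury (planet)', 'Mercury (element)']
```

In the paper the mixture weights are learned in a large-margin framework rather than hand-set as here; the sketch only shows how a fixed convex mixture of submodular components is optimized.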


Computer Speech & Language | 2017

SVitchboard-II and FiSVer-I: Crafting high-quality and low-complexity conversational English speech corpora using submodular function optimization

Yuzong Liu; Rishabh K. Iyer; Katrin Kirchhoff; Jeff A. Bilmes

We introduce a set of benchmark corpora of conversational English speech derived from the Switchboard-I and Fisher datasets. Traditional automatic speech recognition (ASR) research requires considerable computational resources and has slow experimental turnaround times. Our goal is to introduce these new datasets to researchers in the ASR and machine learning communities in order to facilitate the development of novel speech recognition techniques on smaller but still acoustically rich, diverse, and hence interesting corpora. We select these corpora to maximize an acoustic quality criterion while limiting the vocabulary size (from 10 words up to 10,000 words), where both “acoustic quality” and vocabulary size are adeptly measured via various submodular functions. We also survey numerous submodular functions that could be useful to measure both “acoustic quality” and “corpus complexity” and offer guidelines on when and why a scientist may wish to use one vs. another. The corpora selection process itself is naturally performed using various state-of-the-art submodular function optimization procedures, including submodular level-set constrained submodular optimization (SCSC/SCSK), difference-of-submodular (DS) optimization, and unconstrained submodular minimization (SFM), all of which are fully defined herein. While the focus of this paper is on the resultant speech corpora, and the survey of possible objectives, a consequence of the paper is a thorough empirical comparison of the relative merits of these modern submodular optimization procedures. We provide baseline word recognition results on all of the resultant speech corpora for both Gaussian mixture model (GMM) and deep neural network (DNN)-based systems, and we have released all of the corpora definitions and Kaldi training recipes for free in the public domain.
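The core selection problem in the abstract above is maximizing a submodular quality measure while keeping the vocabulary under a size limit. Here is a minimal sketch of that pattern under toy assumptions: the "utterances" and the word-coverage objective are invented stand-ins for the paper's far richer acoustic-quality functions and its SCSC/SCSK machinery.

```python
# Sketch: greedy submodular selection with a vocabulary-size cap.
# Each step adds the utterance with the largest word-coverage gain
# whose inclusion keeps the corpus vocabulary within the budget.

def vocab(selected, utterances):
    """Set of distinct words in the selected utterances."""
    words = set()
    for u in selected:
        words |= set(utterances[u].split())
    return words

def select_corpus(utterances, vocab_budget):
    """Grow a corpus subset greedily, never exceeding vocab_budget
    distinct words (a knapsack-like constraint on vocabulary size)."""
    selected = []
    while True:
        current = vocab(selected, utterances)
        best, best_gain = None, 0
        for u in utterances:
            if u in selected:
                continue
            new_vocab = current | set(utterances[u].split())
            if len(new_vocab) > vocab_budget:
                continue  # would break the vocabulary-size constraint
            gain = len(new_vocab) - len(current)  # marginal coverage
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:
            return selected  # no feasible utterance adds coverage
        selected.append(best)

# Toy corpus: pick utterances while keeping at most 6 distinct words.
utts = {
    "u1": "yes i think so",
    "u2": "no not really",
    "u3": "i think not",
}
print(select_corpus(utts, 6))
# → ['u1', 'u3']  ('u2' would push the vocabulary to 7 words)
```

The actual corpora use acoustic feature-based submodular objectives and the constrained optimization procedures (SCSC/SCSK, DS, SFM) named in the abstract; this sketch only conveys the shape of the trade-off between quality gain and vocabulary growth.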


Neural Information Processing Systems | 2013

Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

Rishabh K. Iyer; Jeff A. Bilmes


Neural Information Processing Systems | 2014

Learning Mixtures of Submodular Functions for Image Collection Summarization

Sebastian Tschiatschek; Rishabh K. Iyer; Haochen Wei; Jeff A. Bilmes


International Conference on Machine Learning | 2013

Fast Semidifferential-based Submodular Function Optimization

Rishabh K. Iyer; Stefanie Jegelka; Jeff A. Bilmes


Uncertainty in Artificial Intelligence | 2012

Algorithms for approximate minimization of the difference between submodular functions, with applications

Rishabh K. Iyer; Jeff A. Bilmes


Neural Information Processing Systems | 2013

Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

Rishabh K. Iyer; Stefanie Jegelka; Jeff A. Bilmes


International Conference on Machine Learning | 2015

Submodularity in Data Subset Selection and Active Learning

Kai Wei; Rishabh K. Iyer; Jeff A. Bilmes


International Conference on Machine Learning | 2014

Fast Multi-stage Submodular Maximization

Kai Wei; Rishabh K. Iyer; Jeff A. Bilmes


Neural Information Processing Systems | 2012

Submodular-Bregman and the Lovász-Bregman Divergences with Applications

Rishabh K. Iyer; Jeff A. Bilmes

Collaboration


Dive into Rishabh K. Iyer's collaborations.

Top Co-Authors

Jeff A. Bilmes (University of Washington)
Kai Wei (University of Washington)
Stefanie Jegelka (Massachusetts Institute of Technology)
Ganesh Ramakrishnan (Indian Institute of Technology Bombay)
Wenruo Bai (University of Washington)
Subhasis Chaudhuri (Indian Institute of Technology Bombay)
Shengjie Wang (University of Washington)