Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bhuwan Dhingra is active.

Publication


Featured research published by Bhuwan Dhingra.


Meeting of the Association for Computational Linguistics | 2017

Gated-Attention Readers for Text Comprehension

Bhuwan Dhingra; Hanxiao Liu; Zhilin Yang; William W. Cohen; Ruslan Salakhutdinov

In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection. The GA Reader obtains state-of-the-art results on three benchmarks for this task: the CNN & Daily Mail news stories and the Who Did What dataset. The effectiveness of multiplicative interaction is demonstrated by an ablation study, and by comparing to alternative compositional operators for implementing the gated-attention. The code is available at this https URL.
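
The multiplicative gated-attention step the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name and shapes are assumptions, and the full GA Reader applies this gating between the layers of a multi-hop recurrent reader.

```python
import numpy as np

def gated_attention(doc_states, query_states):
    # doc_states: (T_doc, d) token states from the document reader
    # query_states: (T_q, d) query token embeddings
    # For each document token, attend over the query tokens, then gate the
    # token state by element-wise multiplication with the attended summary.
    scores = doc_states @ query_states.T            # (T_doc, T_q) dot products
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over query tokens
    q_summary = alpha @ query_states                # (T_doc, d) per-token summary
    return doc_states * q_summary                   # multiplicative gating
```

The element-wise product is the "multiplicative interaction" the ablation study compares against alternatives such as addition or concatenation.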


Meeting of the Association for Computational Linguistics | 2016

Tweet2Vec: Character-Based Distributed Representations for Social Media

Bhuwan Dhingra; Zhong Zhou; Dylan Fitzpatrick; Michael Muehl; William W. Cohen

Text from social media provides a set of challenges that can cause traditional NLP approaches to fail. Informal language, spelling errors, abbreviations, and special characters are all commonplace in these posts, leading to a prohibitively large vocabulary size for word-level approaches. We propose a character composition model, tweet2vec, which finds vector-space representations of whole tweets by learning complex, non-local dependencies in character sequences. The proposed model outperforms a word-level baseline at predicting user-annotated hashtags associated with the posts, doing significantly better when the input contains many out-of-vocabulary words or unusual character sequences. Our tweet2vec encoder is publicly available.
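
The character-level pipeline the abstract motivates can be sketched as follows. This is a self-contained illustration with hypothetical names; tweet2vec composes the character embeddings with a learned bidirectional GRU, for which a simple mean is used here only as a stand-in.

```python
import numpy as np

def char_tweet_vec(tweet, char_emb, char_to_id, unk_id=0):
    # Map each character (letters, '#', '@', punctuation, etc.) to an id;
    # unseen characters fall back to unk_id, so misspellings and unusual
    # character sequences never blow up a word-level vocabulary.
    ids = [char_to_id.get(c, unk_id) for c in tweet]
    vecs = char_emb[ids]           # (n_chars, d) character embeddings
    # tweet2vec composes these with a bidirectional GRU; the mean below is
    # a stand-in for that learned composition.
    return vecs.mean(axis=0)       # (d,) whole-tweet representation
```

The point of the character-level vocabulary is robustness: any input string maps to a vector, including out-of-vocabulary words.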


Students Conference on Engineering and Systems | 2012

Stock market prediction using Hidden Markov Models

Aditya Gupta; Bhuwan Dhingra

Stock market prediction is a classic problem which has been analyzed extensively using tools and techniques of machine learning. Interesting properties which make this modeling non-trivial are the time dependence, volatility, and other complex dependencies of this problem. To incorporate these, Hidden Markov Models (HMMs) have recently been applied to forecast and predict the stock market. We present a Maximum a Posteriori HMM approach for forecasting stock values for the next day given historical data. In our approach, we consider the fractional change in stock value and the intra-day high and low values of the stock to train a continuous HMM. This HMM is then used to make a Maximum a Posteriori decision over all the possible stock values for the next day. We test our approach on several stocks, and compare its performance to existing methods based on HMMs and Artificial Neural Networks using the Mean Absolute Percentage Error (MAPE).
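
The MAP decision described above can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions: a 1-D observation (fractional change only, whereas the paper's continuous HMM also observes intra-day high and low), a single Gaussian emission per state, an already-trained model (pi, A, means, var), and hypothetical function names.

```python
import numpy as np

def forward_loglik(obs, pi, A, means, var):
    """Log-likelihood of a 1-D sequence under a Gaussian-emission HMM,
    computed with the forward algorithm in log space for stability."""
    def log_emit(x):
        # Gaussian log-density of x under each state's emission distribution
        return -0.5 * (np.log(2 * np.pi * var) + (x - means) ** 2 / var)
    log_alpha = np.log(pi) + log_emit(obs[0])
    for x in obs[1:]:
        # alpha_t(j) = emit_j(x) * sum_i alpha_{t-1}(i) * A[i, j]
        m = log_alpha.max()
        log_alpha = log_emit(x) + m + np.log(np.exp(log_alpha - m) @ A)
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

def map_next_change(history, candidates, pi, A, means, var):
    """MAP forecast: choose the candidate next-day fractional change that
    maximizes the likelihood of the extended observation sequence."""
    scores = [forward_loglik(np.append(history, c), pi, A, means, var)
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```

In practice the candidate set is a discretized grid of plausible fractional changes, and the chosen change is converted back to a predicted closing price.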


International Joint Conference on Artificial Intelligence | 2017

Using Graphs of Classifiers to Impose Declarative Constraints on Semi-supervised Learning

Lidong Bing; William W. Cohen; Bhuwan Dhingra

We propose a general approach to modeling semi-supervised learning (SSL) algorithms. Specifically, we present a declarative language for modeling both traditional supervised classification tasks and many SSL heuristics, including both well-known heuristics such as co-training and novel domain-specific heuristics. In addition to representing individual SSL heuristics, we show that multiple heuristics can be automatically combined using Bayesian optimization methods. We experiment with two classes of tasks, link-based text classification and relation extraction. We show modest improvements on well-studied link-based classification benchmarks, and state-of-the-art results on relation-extraction tasks for two realistic domains.


North American Chapter of the Association for Computational Linguistics | 2016

Using Graphs of Classifiers to Impose Constraints on Semi-supervised Relation Extraction.

Lidong Bing; William W. Cohen; Bhuwan Dhingra; Richard C. Wang

We propose a general approach to modeling semi-supervised learning constraints on unlabeled data. Both traditional supervised classification tasks and many natural semi-supervised learning heuristics can be approximated by specifying the desired outcome of walks through a graph of classifiers. We demonstrate the modeling capability of this approach on the task of relation extraction, and experimental results show that the modeled constraints improve performance as expected.


Meeting of the Association for Computational Linguistics | 2017

Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access

Bhuwan Dhingra; Lihong Li; Xiujun Li; Jianfeng Gao; Yun-Nung Chen; Faisal Ahmed; Li Deng


Archive | 2012

Detecting that a mobile device is riding with a vehicle

Leonard Henry Grokop; Bhuwan Dhingra


International Conference on Learning Representations | 2017

Words or Characters? Fine-grained Gating for Reading Comprehension

Zhilin Yang; Bhuwan Dhingra; Ye Yuan; Junjie Hu; William W. Cohen; Ruslan Salakhutdinov


arXiv: Learning | 2016

A User Simulator for Task-Completion Dialogues

Xiujun Li; Zachary C. Lipton; Bhuwan Dhingra; Lihong Li; Jianfeng Gao; Yun-Nung Chen


arXiv: Computation and Language | 2017

Quasar: Datasets for Question Answering by Search and Reading.

Bhuwan Dhingra; Kathryn Mazaitis; William W. Cohen

Collaboration


Dive into Bhuwan Dhingra's collaborations.

Top Co-Authors

William W. Cohen
Carnegie Mellon University

Zhilin Yang
Carnegie Mellon University

Kathryn Mazaitis
Carnegie Mellon University

Hanxiao Liu
Carnegie Mellon University