Network


Latest external collaborations at the country level.

Hotspot


Research topics where Dipanjan Das is active.

Publication


Featured research published by Dipanjan Das.


Computational Linguistics | 2014

Frame-semantic parsing

Dipanjan Das; Desai Chen; André F. T. Martins; Nathan Schneider; Noah A. Smith

Frame semantics is a linguistic theory that has been instantiated for English in the FrameNet lexicon. We solve the problem of frame-semantic parsing using a two-stage statistical model that takes lexical targets (i.e., content words and phrases) in their sentential contexts and predicts frame-semantic structures. Given a target in context, the first stage disambiguates it to a semantic frame. This model uses latent variables and semi-supervised learning to improve frame disambiguation for targets unseen at training time. The second stage finds the target's locally expressed semantic arguments. At inference time, a fast exact dual decomposition algorithm collectively predicts all the arguments of a frame at once in order to respect declaratively stated linguistic constraints, resulting in qualitatively better structures than naïve local predictors. Both components are feature-based and discriminatively trained on a small set of annotated frame-semantic parses. On the SemEval 2007 benchmark data set, the approach, along with a heuristic identifier of frame-evoking targets, outperforms the prior state of the art by significant margins. Additionally, we present experiments on the much larger FrameNet 1.5 data set. We have released our frame-semantic parser as open-source software.
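As a rough illustration of the two-stage pipeline described above, the sketch below first disambiguates a target to a frame and then assigns its arguments. The frame inventory, the toy scoring function, and the greedy argument assignment are invented stand-ins for the paper's discriminative models and exact dual-decomposition inference.

# Toy sketch of the two-stage pipeline; frames, roles, and scores are invented.
CANDIDATE_FRAMES = {
    "buy": ["Commerce_buy"],
    "cook": ["Apply_heat", "Cooking_creation"],
}
FRAME_ROLES = {
    "Commerce_buy": ["Buyer", "Goods"],
    "Apply_heat": ["Cook", "Food"],
    "Cooking_creation": ["Cook", "Produced_food"],
}

def disambiguate_frame(target, context_tokens):
    """Stage 1: choose a semantic frame for the target in its context."""
    candidates = CANDIDATE_FRAMES.get(target, [])
    # Toy score: prefer frames whose first name component appears in the context.
    def score(frame):
        stem = frame.lower().split("_")[0]
        return sum(stem in tok.lower() for tok in context_tokens)
    return max(candidates, key=score) if candidates else None

def identify_arguments(frame, candidate_spans):
    """Stage 2: greedily fill each role with an unused candidate span."""
    assignment, used = {}, set()
    for role in FRAME_ROLES.get(frame, []):
        span = next((s for s in candidate_spans if s not in used), None)
        if span is not None:
            assignment[role] = span
            used.add(span)
    return assignment

tokens = ["Mary", "cooked", "the", "rice"]
frame = disambiguate_frame("cook", tokens)
print(frame, identify_arguments(frame, ["Mary", "the rice"]))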


Empirical Methods in Natural Language Processing | 2016

A Decomposable Attention Model for Natural Language Inference

Ankur P. Parikh; Oscar Täckström; Dipanjan Das; Jakob Uszkoreit

We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
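A minimal NumPy sketch of the attend, compare, and aggregate steps is shown below. The paper's feed-forward networks are reduced here to single random linear maps, and all dimensions are arbitrary toy values, so this illustrates only the shape of the computation, not the trained model.

# Sketch of decomposable attention; F, G, H are random stand-ins for learned MLPs.
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 6                      # embedding and hidden sizes (toy values)
F = rng.normal(size=(d, h))      # stand-in for the "attend" network
G = rng.normal(size=(2 * d, h))  # stand-in for the "compare" network
H = rng.normal(size=(2 * h, 3))  # stand-in for the final classifier

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b):
    """a: (len_a, d) premise embeddings, b: (len_b, d) hypothesis embeddings."""
    # Attend: soft alignment scores between all token pairs.
    e = (a @ F) @ (b @ F).T                      # (len_a, len_b)
    beta = softmax(e, axis=1) @ b                # b aligned to each token of a
    alpha = softmax(e, axis=0).T @ a             # a aligned to each token of b
    # Compare: process each token together with its aligned counterpart.
    v1 = np.concatenate([a, beta], axis=1) @ G
    v2 = np.concatenate([b, alpha], axis=1) @ G
    # Aggregate: sum over tokens, then classify (entail / contradict / neutral).
    v = np.concatenate([v1.sum(0), v2.sum(0)])
    return softmax(v @ H, axis=0)

premise = rng.normal(size=(5, d))
hypothesis = rng.normal(size=(4, d))
print(decomposable_attention(premise, hypothesis))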


Empirical Methods in Natural Language Processing | 2008

Stacking Dependency Parsers

André F. T. Martins; Dipanjan Das; Noah A. Smith; Eric P. Xing

We explore a stacked framework for learning to predict dependency structures for natural language sentences. A typical approach in graph-based dependency parsing has been to assume a factorized model, where local features are used but a global function is optimized (McDonald et al., 2005b). Recently Nivre and McDonald (2008) used the output of one dependency parser to provide features for another. We show that this is an example of stacked learning, in which a second predictor is trained to improve the performance of the first. Further, we argue that this technique is a novel way of approximating rich non-local features in the second parser, without sacrificing efficient, model-optimal prediction. Experiments on twelve languages show that stacking transition-based and graph-based parsers improves performance over existing state-of-the-art dependency parsers.
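The sketch below illustrates stacked learning in its simplest form: a level-0 parser's predicted heads become an extra feature for a level-1 parser. Both parsers here are trivial hand-written heuristics, standing in for the transition-based and graph-based models used in the paper.

# Toy stacking illustration; both "parsers" are invented heuristics.
def level0_parse(tokens):
    """Hypothetical base parser: attach every token to the previous one."""
    return {i: i - 1 for i in range(1, len(tokens))}

def level1_score(tokens, dep, head, level0_heads):
    """Score a candidate (head, dep) arc; one feature comes from level 0."""
    score = 0.0
    score += 1.0 if head == 0 and tokens[dep].endswith("s") else 0.0  # toy lexical feature
    score += 2.0 if level0_heads.get(dep) == head else 0.0            # stacking feature
    return score

def level1_parse(tokens):
    level0_heads = level0_parse(tokens)
    heads = {}
    for dep in range(1, len(tokens)):
        candidates = [h for h in range(len(tokens)) if h != dep]
        heads[dep] = max(candidates, key=lambda h: level1_score(tokens, dep, h, level0_heads))
    return heads

sentence = ["<ROOT>", "dogs", "chase", "cats"]
print(level1_parse(sentence))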


Meeting of the Association for Computational Linguistics | 2014

Semantic Frame Identification with Distributed Word Representations

Karl Moritz Hermann; Dipanjan Das; Jason Weston; Kuzman Ganchev

We present a novel technique for semantic frame identification using distributed representations of predicates and their syntactic context; this technique leverages automatic syntactic parses and a generic set of word embeddings. Given labeled data annotated with frame-semantic parses, we learn a model that projects the set of word representations for the syntactic context around a predicate to a low-dimensional representation. The latter is used for semantic frame identification; with a standard argument identification method inspired by prior work, we achieve state-of-the-art results on FrameNet-style frame-semantic analysis. Additionally, we report strong results on PropBank-style semantic role labeling in comparison to prior work.
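The core idea can be sketched as below: word embeddings around a predicate are concatenated, projected into a low-dimensional space, and matched against frame embeddings. The projection matrix, frame vectors, and frame names are random or invented stand-ins for parameters the paper learns from annotated data.

# Sketch of frame identification via a learned low-dimensional projection.
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 8                       # word-embedding size, projection size (toy values)
W = rng.normal(size=(3 * d, k))    # learned projection (random stand-in here)
frame_vecs = {"Commerce_buy": rng.normal(size=k),
              "Motion": rng.normal(size=k)}

def identify_frame(subj_vec, pred_vec, obj_vec):
    context = np.concatenate([subj_vec, pred_vec, obj_vec])  # syntactic context window
    z = context @ W                                           # low-dimensional representation
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Nearest frame embedding by cosine similarity.
    return max(frame_vecs, key=lambda f: cos(z, frame_vecs[f]))

print(identify_frame(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)))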


Empirical Methods in Natural Language Processing | 2014

Learning Compact Lexicons for CCG Semantic Parsing

Yoav Artzi; Dipanjan Das; Slav Petrov

We present methods to control the lexicon size when learning a Combinatory Categorial Grammar semantic parser. Existing methods incrementally expand the lexicon by greedily adding entries, considering a single training datapoint at a time. We propose using corpus-level statistics for lexicon learning decisions. We introduce voting to globally consider adding entries to the lexicon, and pruning to remove entries no longer required to explain the training data. Our methods result in state-of-the-art performance on the task of executing sequences of natural language instructions, achieving up to 25% error reduction, with lexicons that are up to 70% smaller and are qualitatively less noisy.
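A toy illustration of the corpus-level voting and pruning decisions is given below: candidate lexical entries proposed on individual examples are added only when enough examples vote for them, and entries no longer used by any training parse are dropped. The entries and threshold are invented, and the real system reasons over CCG parses rather than counting raw proposals.

# Toy corpus-level lexicon control: voting to add entries, pruning unused ones.
from collections import Counter

def vote_entries(proposals_per_example, min_votes=2):
    """proposals_per_example: list of sets of candidate lexical entries."""
    votes = Counter(e for proposals in proposals_per_example for e in proposals)
    return {entry for entry, n in votes.items() if n >= min_votes}

def prune_lexicon(lexicon, entries_used_in_parses):
    """Drop entries that no training parse still relies on."""
    return {e for e in lexicon if e in entries_used_in_parses}

proposals = [{("go", "S\\NP"), ("left", "ADJ")},
             {("go", "S\\NP")},
             {("go", "S/NP")}]
lexicon = vote_entries(proposals)
print(prune_lexicon(lexicon, {("go", "S\\NP")}))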


Empirical Methods in Natural Language Processing | 2015

Semantic Role Labeling with Neural Network Factors

Nicholas FitzGerald; Oscar Täckström; Kuzman Ganchev; Dipanjan Das

We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.
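Below is a rough sketch of the scoring idea: argument spans and semantic roles live in a shared embedding space, and their dot product acts as the potential of the graphical model. The role inventory, the averaging span embedding, and the purely local decoding are simplified stand-ins for the learned neural network factors and structured inference in the paper.

# Sketch of neural-network factors for SRL; all parameters are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
d = 16
role_emb = {"ARG0": rng.normal(size=d), "ARG1": rng.normal(size=d), "NONE": rng.normal(size=d)}

def span_embedding(span_vecs):
    """Embed an argument span, here simply by averaging its word vectors."""
    return np.mean(span_vecs, axis=0)

def role_potentials(span_vecs):
    """Potential for each role: dot product in the shared embedding space."""
    s = span_embedding(span_vecs)
    return {role: float(s @ v) for role, v in role_emb.items()}

def label_spans(spans):
    """Local decoding: independently pick the best role per span."""
    return [max(role_potentials(sv).items(), key=lambda kv: kv[1])[0] for sv in spans]

spans = [rng.normal(size=(3, d)), rng.normal(size=(2, d))]
print(label_spans(spans))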


Intelligent User Interfaces | 2007

Combating information overload in non-visual web access using context

Jalal Mahmud; Yevgen Borodin; Dipanjan Das; I. V. Ramakrishnan

Web sites are designed for a graphical mode of interaction. Sighted users can visually segment Web pages and quickly identify relevant information. In contrast, visually disabled individuals have to use screen readers to browse the Web. Screen readers process pages sequentially and read through everything, making Web browsing time-consuming and strenuous. The use of shortcut keys and searching offers some improvement, but the problem still remains. In this paper, we address this problem using the notion of context. When a user follows a link, we capture the context of the link and use it to identify relevant information on the next page. The content of this page is rearranged so that the relevant information is read out first. We conducted a series of experiments to compare the performance of our prototype system with the state-of-the-art JAWS screen reader. Our results show that the use of context can potentially save browsing time as well as improve the browsing experience of visually disabled individuals.
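A simplified sketch of the context idea follows: the words around a followed link are captured, and the next page's content blocks are reordered so that the block with the highest overlap with that context is read out first. The pages and the bag-of-words overlap measure are invented examples; the actual system uses richer context and page analysis.

# Sketch of context-directed reordering of a page for a screen reader.
def link_context(page_words, link_index, window=3):
    """Words surrounding the followed link on the source page."""
    return set(page_words[max(0, link_index - window): link_index + window + 1])

def reorder_blocks(blocks, context):
    """Read blocks with the highest word overlap with the context first."""
    def overlap(block):
        return len(context & set(block.lower().split()))
    return sorted(blocks, key=overlap, reverse=True)

source_page = "latest news sports scores college football results".split()
context = link_context(source_page, link_index=4)   # the user followed the "football" link
next_page = ["Site navigation and advertisements",
             "College football results from Saturday's games",
             "Weather forecast for the weekend"]
print(reorder_blocks(next_page, context))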


Proceedings of the 2009 Workshop on Language Generation and Summarisation (UCNLG+Sum 2009) | 2009

Non-textual Event Summarization by Applying Machine Learning to Template-based Language Generation

Mohit Kumar; Dipanjan Das; Sachin Agarwal; Alexander I. Rudnicky

We describe a learning-based system that creates draft reports based on observation of people preparing such reports in a target domain (conference replanning). The reports (or briefings) are based on a mix of text and event data. The latter consist of task creation and completion actions, collected from a wide variety of sources within the target environment. The report drafting system is part of a larger learning-based cognitive assistant system that improves the quality of its assistance based on an opportunity to learn from observation. The system can learn to accurately predict the briefing assembly behavior and shows significant performance improvements relative to a non-learning system, demonstrating that it is possible to create meaningful verbal descriptions of activity from event streams.
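As a very rough sketch, template-based generation over an event stream can look like the following: task events are slotted into sentence templates and ordered into a draft briefing. The event fields, templates, and fixed ordering are invented; the paper's system instead learns which items to include from observed report-writing behavior.

# Minimal template-based generation over an invented event stream.
EVENTS = [
    {"type": "task_created", "task": "book conference rooms", "who": "Ann"},
    {"type": "task_completed", "task": "book conference rooms", "who": "Ann"},
    {"type": "task_created", "task": "reschedule keynote", "who": "Bob"},
]

TEMPLATES = {
    "task_completed": "{who} completed the task '{task}'.",
    "task_created": "{who} opened the task '{task}'.",
}

def draft_briefing(events):
    # Report completions first; a learned system would rank items instead.
    order = {"task_completed": 0, "task_created": 1}
    lines = [TEMPLATES[e["type"]].format(**e)
             for e in sorted(events, key=lambda e: order[e["type"]])]
    return "\n".join(lines)

print(draft_briefing(EVENTS))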


Electronic Imaging | 2008

Improving multimedia retrieval with a video OCR

Dipanjan Das; Datong Chen; Alexander G. Hauptmann

We present a set of experiments with a video OCR system (VOCR) tailored for video information retrieval and establish its importance in multimedia search in general and for some specific queries in particular. The system, inspired by existing work on text detection and recognition in images, has been developed using techniques involving detailed analysis of video frames to produce candidate text regions. The text regions are then binarized and sent to a commercial OCR, resulting in ASCII text that is finally used to create search indexes. The system is evaluated using the TRECVID data. We compare the system's performance from an information retrieval perspective with another VOCR developed using multi-frame integration and empirically demonstrate that deep analysis of individual video frames results in better video retrieval. We also evaluate the effect of various textual sources on multimedia retrieval by combining the VOCR outputs with automatic speech recognition (ASR) transcripts. For general search queries, the VOCR system coupled with ASR sources outperforms the other system by a large margin. For search queries that involve named entities, especially people names, the VOCR system even outperforms speech transcripts, demonstrating that source selection for particular query types is essential.
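The retrieval pipeline described above can be sketched as the skeleton below: detect candidate text regions in each frame, binarize them, run OCR, and index the recognized words by video. The detection, binarization, and OCR functions are stubs; the actual system uses detailed image analysis and a commercial OCR engine.

# Skeleton of the VOCR-to-retrieval pipeline; the image-processing steps are stubs.
def detect_text_regions(frame):
    """Stub: return candidate text regions found in a video frame."""
    return frame.get("regions", [])

def binarize(region):
    """Stub: convert a region to a black-and-white image for OCR."""
    return region

def run_ocr(region):
    """Stub: a commercial OCR engine would return recognized text here."""
    return region.get("text", "")

def index_video(video_id, frames, index):
    for frame in frames:
        for region in detect_text_regions(frame):
            for word in run_ocr(binarize(region)).lower().split():
                index.setdefault(word, set()).add(video_id)

index = {}
index_video("clip_01", [{"regions": [{"text": "CNN Election Night"}]}], index)
print(index.get("election"))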


Meeting of the Association for Computational Linguistics | 2014

Enhanced Search with Wildcards and Morphological Inflections in the Google Books Ngram Viewer

Jason Mann; David Zhang; Lu Yang; Dipanjan Das; Slav Petrov

We present a new version of the Google Books Ngram Viewer, which plots the frequency of words and phrases over the last five centuries; its data encompasses 6% of the world’s published books. The new Viewer adds three features for more powerful search: wildcards, morphological inflections, and capitalization. These additions allow the discovery of patterns that were previously difficult to find and further facilitate the study of linguistic trends in printed text.
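A minimal sketch of wildcard expansion over an n-gram table is given below: a '*' slot in the query is matched against stored n-grams, and the most frequent expansions are returned. The counts are invented, not Google Books data; the real Viewer also handles morphological inflections and capitalization variants.

# Toy wildcard expansion over an invented n-gram count table.
NGRAM_COUNTS = {
    ("President", "Lincoln"): 120,
    ("President", "Kennedy"): 90,
    ("President", "elect"): 15,
}

def expand_wildcard(query, counts, top_k=2):
    """query: tuple of words, with '*' marking the wildcard slot."""
    matches = [(ngram, n) for ngram, n in counts.items()
               if len(ngram) == len(query)
               and all(q in ("*", w) for q, w in zip(query, ngram))]
    return [ngram for ngram, _ in sorted(matches, key=lambda m: -m[1])[:top_k]]

print(expand_wildcard(("President", "*"), NGRAM_COUNTS))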

Collaboration


An overview of Dipanjan Das's collaborations.

Top Co-Authors


Noah A. Smith

University of Washington


Nathan Schneider

Carnegie Mellon University


Desai Chen

Massachusetts Institute of Technology


Oscar Täckström

Swedish Institute of Computer Science


Kevin Gimpel

Toyota Technological Institute at Chicago
