
Publication


Featured research published by Pushpendre Rastogi.


North American Chapter of the Association for Computational Linguistics | 2015

Multiview LSA: Representation Learning via Generalized CCA

Pushpendre Rastogi; Benjamin Van Durme; Raman Arora

Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which, when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test sets show our approach to be competitive with the state of the art.
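
A minimal sketch of the standard MAX-VAR GCCA solution that MVLSA builds on, for readers who want the linear algebra made concrete: project each view onto its (ridge-regularized) column space, sum the projections, and take the top eigenvectors as the shared representation. The paper's fast approximation and its missing-value handling are not reproduced here; the toy data and regularizer below are made up.

# Minimal MAX-VAR GCCA sketch (illustrative only; not the paper's fast
# approximation and without its missing-value handling).
import numpy as np

def gcca(views, k, reg=1e-3):
    """views: list of (n x d_j) matrices, one per view; returns a shared (n x k) G."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        # Projection onto the column space of each view (ridge-regularized).
        P = X @ np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T)
        M += P
    # Shared representation: top-k eigenvectors of the summed projections.
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

# Toy usage: three random "views" of 100 items.
rng = np.random.default_rng(0)
G = gcca([rng.normal(size=(100, d)) for d in (20, 30, 40)], k=5)
print(G.shape)  # (100, 5)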


International Joint Conference on Natural Language Processing | 2015

PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification

Ellie Pavlick; Pushpendre Rastogi; Juri Ganitkevitch; Benjamin Van Durme; Chris Callison-Burch

We present a new release of the Paraphrase Database. PPDB 2.0 includes a discriminatively re-ranked set of paraphrases that achieve a higher correlation with human judgments than PPDB 1.0's heuristic rankings. Each paraphrase pair in the database now also includes fine-grained entailment relations, word embedding similarities, and style annotations.


Empirical Methods in Natural Language Processing | 2015

Script Induction as Language Modeling

Rachel Rudinger; Pushpendre Rastogi; Francis Ferraro; Benjamin Van Durme

The narrative cloze is an evaluation metric commonly used for work on automatic script induction. While prior work in this area has focused on count-based methods from distributional semantics, such as pointwise mutual information, we argue that the narrative cloze can be productively reframed as a language modeling task. By training a discriminative language model for this task, we attain improvements of up to 27 percent over prior methods on standard narrative cloze metrics.
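
As a toy illustration of the reframing (not the discriminative model used in the paper), the sketch below fits a smoothed bigram language model over event tokens on made-up chains and ranks narrative cloze candidates by the probability of the completed chain.

# Toy sketch of the narrative cloze as language modeling: rank candidate
# events for a held-out slot by bigram probability of the completed chain.
# (Made-up data; the paper uses stronger discriminative language models.)
from collections import Counter
import math

train_chains = [
    ["arrest", "charge", "convict", "sentence"],
    ["arrest", "charge", "acquit"],
    ["charge", "convict", "appeal"],
]

bigrams, unigrams = Counter(), Counter()
for chain in train_chains:
    padded = ["<s>"] + chain + ["</s>"]
    unigrams.update(padded)
    bigrams.update(zip(padded, padded[1:]))

def chain_logprob(chain, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of an event chain."""
    padded = ["<s>"] + chain + ["</s>"]
    vocab = len(unigrams)
    return sum(
        math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
        for a, b in zip(padded, padded[1:])
    )

# Narrative cloze: "arrest  ???  convict" with one event held out.
context, slot = ["arrest", None, "convict"], 1
candidates = ["charge", "appeal", "acquit"]
scored = sorted(
    candidates,
    key=lambda e: chain_logprob(context[:slot] + [e] + context[slot + 1:]),
    reverse=True,
)
print(scored)  # most probable completion first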


International Joint Conference on Natural Language Processing | 2015

FrameNet+: Fast Paraphrastic Tripling of FrameNet

Ellie Pavlick; Travis Wolfe; Pushpendre Rastogi; Chris Callison-Burch; Mark Dredze; Benjamin Van Durme

We increase the lexical coverage of FrameNet through automatic paraphrasing. We use crowdsourcing to manually filter out bad paraphrases in order to ensure a high-precision resource. Our expanded FrameNet contains an additional 22K lexical units, a 3-fold increase over the current FrameNet, and achieves 40% better coverage when evaluated in a practical setting on New York Times data.


Workshop on Events: Definition, Detection, Coreference, and Representation | 2014

Augmenting FrameNet Via PPDB

Pushpendre Rastogi; Benjamin Van Durme

FrameNet is a lexico-semantic dataset that embodies the theory of frame semantics. Like other semantic databases, FrameNet is incomplete. We augment it via the paraphrase database, PPDB, and gain a threefold increase in coverage at 65% precision.
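
The augmentation step can be pictured as a simple lookup-and-filter over a paraphrase table: paraphrases of a frame's existing lexical units become new candidate triggers for that frame. The sketch below uses made-up frames, a made-up PPDB-style score table, and an arbitrary threshold; the paper's actual ranking and precision filtering are more involved.

# Toy sketch of frame augmentation via paraphrase lookup.
frames = {"Arrest": {"arrest", "apprehend"}, "Verdict": {"convict"}}
ppdb = {  # (word, paraphrase) -> paraphrase score (made-up values)
    ("arrest", "detain"): 0.81,
    ("arrest", "bust"): 0.44,
    ("apprehend", "capture"): 0.73,
    ("convict", "find guilty"): 0.69,
}

def augment(frames, ppdb, threshold=0.6):
    """Add high-scoring paraphrases of existing lexical units to each frame."""
    expanded = {f: set(units) for f, units in frames.items()}
    for (word, para), score in ppdb.items():
        if score < threshold:
            continue
        for frame, units in frames.items():
            if word in units:
                expanded[frame].add(para)
    return expanded

print(augment(frames, ppdb))
# e.g. Arrest gains "detain" and "capture"; "bust" is filtered out by the threshold.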


North American Chapter of the Association for Computational Linguistics | 2016

Weighting Finite-State Transductions With Neural Context

Pushpendre Rastogi; Ryan Cotterell; Jason Eisner

How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-state transducer to score all possible output strings, but to augment the scoring function with the help of recurrent networks. A stack of bidirectional LSTMs reads the input string from left-to-right and right-to-left, in order to summarize the input context in which a transducer arc is applied. We combine these learned features with the transducer to define a probability distribution over aligned output strings, in the form of a weighted finite-state automaton. This reduces hand-engineering of features, allows learned features to examine unbounded context in the input string, and still permits exact inference through dynamic programming. We illustrate our method on the tasks of morphological reinflection and lemmatization.
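
To make the "exact inference through dynamic programming" claim concrete, the sketch below scores one candidate output string under a tiny neurally weighted edit transducer: per-position context vectors (random stand-ins for the BiLSTM states) parameterize substitute/insert/delete arc weights, and a forward dynamic program sums over all alignments in log space. The parameterization is simplified relative to the paper, which defines a full distribution over output strings as a weighted finite-state automaton.

# Minimal sketch of scoring one output string under a neurally weighted
# edit transducer (illustrative parameterization, not the paper's model).
import numpy as np

def logsumexp(xs):
    m = max(xs)
    return m + np.log(sum(np.exp(x - m) for x in xs))

def score(x, y, ctx, W):
    """Log total weight of aligning input x to output y.
    ctx: (len(x)+1, d) context features (stand-in for BiLSTM states);
    W: (d, 3) maps features to [substitute, insert, delete] arc log-weights."""
    nx, ny = len(x), len(y)
    op = ctx @ W                      # per-position arc log-weights, shape (nx+1, 3)
    neg_inf = -1e30
    alpha = np.full((nx + 1, ny + 1), neg_inf)
    alpha[0, 0] = 0.0
    for i in range(nx + 1):
        for j in range(ny + 1):
            if i > 0 and j > 0:       # substitute/copy: consume x[i-1], emit y[j-1]
                alpha[i, j] = logsumexp([alpha[i, j], alpha[i - 1, j - 1] + op[i - 1, 0]])
            if j > 0:                 # insert y[j-1]
                alpha[i, j] = logsumexp([alpha[i, j], alpha[i, j - 1] + op[i, 1]])
            if i > 0:                 # delete x[i-1]
                alpha[i, j] = logsumexp([alpha[i, j], alpha[i - 1, j] + op[i - 1, 2]])
    return alpha[nx, ny]

rng = np.random.default_rng(0)
x, y = "singen", "sang"
ctx = rng.normal(size=(len(x) + 1, 8))   # stand-in for learned context features
W = rng.normal(size=(8, 3))
print(score(x, y, ctx, W))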


Conference on Information Sciences and Systems | 2016

Efficient implementation of enhanced adaptive simultaneous perturbation algorithms

Pushpendre Rastogi; Jingyi Zhu; James C. Spall

Stochastic approximation (SA) applies in both the gradient-free optimization (Kiefer-Wolfowitz) and the gradient-based setting (Robbins-Monro). The idea of simultaneous perturbation (SP) has been well established. This paper discusses an efficient way of implementing both the adaptive Newton-like SP algorithms and their enhancements (feedback and optimal weighting incorporated), using the Woodbury matrix identity, a.k.a. matrix inversion lemma. Basically, instead of estimating the Hessian matrix directly, this paper deals with the estimation of the inverse of the Hessian matrix. Furthermore, the preconditioning steps, which are required in early iterations to maintain positive-definiteness of the Hessian estimates, are imposed on the Hessian inverse rather than the Hessian itself. Numerical results also demonstrate the superiority of this efficient implementation on Newton-like SP algorithms.
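
The core trick can be stated in a few lines of linear algebra: because each per-iteration Hessian estimate is a low-rank (rank-two) correction, the inverse of the smoothed estimate can be maintained directly via the Woodbury identity instead of re-inverting a p x p matrix. The sketch below is a generic illustration of that update and omits the paper's step sizes, perturbation distributions, and preconditioning details.

# Maintain the inverse of the (smoothed) Hessian estimate directly,
# updating it with the Woodbury identity after a rank-r correction.
import numpy as np

def woodbury_update(B, U, C, a=1.0):
    """Return inv(a*A + U @ C @ U.T), given B = inv(A).

    U is p x r with small r (r = 2 for the symmetric rank-two Hessian
    estimate used by adaptive SP algorithms), so only an r x r system
    is solved instead of a full p x p inversion."""
    B = B / a                                  # inv(a*A)
    BU = B @ U
    S = np.linalg.inv(C) + U.T @ BU            # r x r
    return B - BU @ np.linalg.solve(S, BU.T)

# Check against a direct inversion on a toy problem.
rng = np.random.default_rng(0)
p = 6
A = 2.0 * np.eye(p)
B = np.linalg.inv(A)
U = rng.normal(size=(p, 2))
C = np.diag([0.5, 0.5])
fast = woodbury_update(B, U, C)
slow = np.linalg.inv(A + U @ C @ U.T)
print(np.allclose(fast, slow))  # True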


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2018

Neural Variational Entity Set Expansion for Automatically Populated Knowledge Graphs.

Pushpendre Rastogi; Adam Poliak; Vince Lyzinski; Benjamin Van Durme

We propose Neural Variational Set Expansion to extract actionable information from a noisy knowledge graph (KG) and propose a general approach for increasing the interpretability of recommendation systems. We demonstrate the usefulness of applying a variational autoencoder to the entity set expansion task on a realistic, automatically generated KG.
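
A minimal sketch, assuming a bag-of-entities encoding, of how a variational autoencoder can be used for set expansion: encode the seed set, decode a distribution over the entity vocabulary, and rank entities outside the seed set by that distribution. The architecture and loss below are illustrative (and the model is untrained here), not the configuration used in the paper.

# Minimal VAE-for-set-expansion sketch (PyTorch; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetVAE(nn.Module):
    def __init__(self, n_entities, hidden=64, latent=16):
        super().__init__()
        self.enc = nn.Linear(n_entities, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Linear(latent, n_entities)

    def forward(self, bag):
        h = torch.relu(self.enc(bag))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        logits = self.dec(z)
        # ELBO terms: multinomial reconstruction + KL to a standard normal prior.
        recon = -(bag * F.log_softmax(logits, dim=-1)).sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return logits, (recon + kl).mean()

def expand(model, seed_ids, n_entities, k=5):
    """Rank entities outside the seed set by the decoder's distribution."""
    bag = torch.zeros(1, n_entities)
    bag[0, seed_ids] = 1.0
    with torch.no_grad():
        logits, _ = model(bag)
    logits[0, seed_ids] = float("-inf")       # don't re-suggest the seeds
    return logits.topk(k, dim=-1).indices[0].tolist()

model = SetVAE(n_entities=1000)               # untrained; suggestions are random here
print(expand(model, seed_ids=[3, 17, 256], n_entities=1000))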


International Joint Conference on Natural Language Processing | 2017

Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework

Aaron Steven White; Pushpendre Rastogi; Kevin Duh; Benjamin Van Durme


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Training Relation Embeddings under Logical Constraints.

Pushpendre Rastogi; Adam Poliak; Benjamin Van Durme

Collaboration


Dive into Pushpendre Rastogi's collaborations.

Top Co-Authors

Ellie Pavlick
University of Pennsylvania

Craig Harman
Johns Hopkins University

Mark Dredze
Johns Hopkins University

Kevin Duh
Nara Institute of Science and Technology

Chandler May
Johns Hopkins University