Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Siva Reddy is active.

Publication


Featured research published by Siva Reddy.


Meeting of the Association for Computational Linguistics | 2016

Question Answering on Freebase via Relation Extraction and Textual Evidence

Kun Xu; Siva Reddy; Yansong Feng; Songfang Huang; Dongyan Zhao

Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.
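The two-stage design described in this abstract can be illustrated with a short sketch: propose candidate answers from a KB with a relation predictor, then keep only those supported by textual evidence. Everything below (the toy KB, the toy Wikipedia page, the keyword-based relation matcher, and the surface-form evidence check) is a hypothetical stand-in, not the authors' neural extractor or inference procedure.

# Minimal sketch, not the authors' system: KB candidates first, then text validation.
TOY_KB = {
    ("barack_obama", "place_of_birth"): ["honolulu"],
    ("barack_obama", "spouse"): ["michelle_obama"],
}
TOY_WIKI = {
    "barack_obama": "Barack Obama was born in Honolulu, Hawaii.",
}

def extract_relation(question):
    """Stand-in for the neural relation extractor: keyword matching."""
    if "born" in question:
        return "place_of_birth"
    if "married" in question or "wife" in question:
        return "spouse"
    return "unknown"

def candidate_answers(entity, question):
    return TOY_KB.get((entity, extract_relation(question)), [])

def supported_by_text(entity, answer):
    """Stand-in for inference over Wikipedia: surface-form lookup."""
    return answer.replace("_", " ") in TOY_WIKI.get(entity, "").lower()

def answer_question(entity, question):
    candidates = candidate_answers(entity, question)
    validated = [a for a in candidates if supported_by_text(entity, a)]
    return validated or candidates  # fall back to KB-only candidates

print(answer_question("barack_obama", "where was obama born?"))  # ['honolulu']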


Meeting of the Association for Computational Linguistics | 2017

Learning Structured Natural Language Representations for Semantic Parsing

Jianpeng Cheng; Siva Reddy; Vijay A. Saraswat; Mirella Lapata

We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We obtain competitive results on various datasets. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.
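As a rough illustration of how a transition system can build predicate-argument structures from an utterance, here is a minimal sketch. The action inventory (SHIFT/DROP/REDUCE) and the hand-written action sequence are invented for illustration; in the paper the transitions are induced and predicted by a neural model rather than written by hand.

# Minimal sketch, assuming a toy shift/reduce-style transition system.
def run_transitions(tokens, actions):
    stack, buffer = [], list(tokens)
    structures = []  # (predicate, argument) pairs
    for action in actions:
        if action == "SHIFT":      # move the next token onto the stack
            stack.append(buffer.pop(0))
        elif action == "DROP":     # discard a function word
            buffer.pop(0)
        elif action == "REDUCE":   # attach top of stack as argument of the item below
            argument = stack.pop()
            predicate = stack[-1]
            structures.append((predicate, argument))
    return structures

tokens = ["capital", "of", "Scotland"]
actions = ["SHIFT", "DROP", "SHIFT", "REDUCE"]
print(run_transitions(tokens, actions))  # [('capital', 'Scotland')]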


International Conference on Natural Language Generation | 2016

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

Shashi Narayan; Siva Reddy; Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer. In this paper we propose to bridge this gap by generating paraphrases of the input question with the goal that at least one of them will be correctly mapped to a knowledge-base query. We introduce a novel grammar model for paraphrase generation that does not require any sentence-aligned paraphrase corpus. Our key idea is to leverage the flexibility and scalability of latent-variable probabilistic context-free grammars to sample paraphrases. We do an extrinsic evaluation of our paraphrases by plugging them into a semantic parser for Freebase. Our evaluation experiments on the WebQuestions benchmark dataset show that the performance of the semantic parser significantly improves over strong baselines.
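The overall control flow — generate paraphrases of the question, then keep the first one the parser can ground in the KB — can be sketched as follows. The paraphrase table and the pattern-matching parser below are toy stand-ins for the latent-variable PCFG sampler and the Freebase semantic parser, and the example data is invented.

# Minimal sketch with toy stand-ins for the paraphrase generator and parser.
TOY_PARAPHRASES = {
    "who came up with the telephone?": [
        "who invented the telephone?",
        "who is the inventor of the telephone?",
    ],
}
TOY_KB = {("the_telephone", "invented_by"): "alexander_graham_bell"}

def parse_to_query(question):
    """Stand-in parser: only understands the pattern 'who invented X?'."""
    prefix = "who invented "
    if question.startswith(prefix):
        entity = question[len(prefix):].rstrip("?").replace(" ", "_")
        return (entity, "invented_by")
    return None

def answer(question):
    # Try the original question first, then its paraphrases.
    for candidate in [question] + TOY_PARAPHRASES.get(question, []):
        query = parse_to_query(candidate)
        if query in TOY_KB:
            return TOY_KB[query]
    return None

print(answer("who came up with the telephone?"))  # alexander_graham_bell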


Meeting of the Association for Computational Linguistics | 2017

Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks

Rajarshi Das; Manzil Zaheer; Siva Reddy; Andrew McCallum

Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. In contrast, web text contains millions of facts that are absent from the KB, but in an unstructured form. Universal schema can support reasoning over the union of structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on the SPADES fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 F1 points. Code and data are available at this https URL.
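A single memory-network hop over a universal-schema-style memory can be sketched with toy vectors as follows. The embeddings here are random placeholders rather than the paper's trained parameters, so the resulting ranking is arbitrary; the point is only the shape of the computation (one shared memory for KB triples and textual patterns, softmax attention, and scoring against the readout).

# Minimal sketch with random toy embeddings, not the paper's trained model.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# In universal schema, structured triples and textual patterns share one memory.
facts = ["obama /people/person/place_of_birth honolulu",
         "obama was born in honolulu"]
memory = rng.normal(size=(len(facts), dim))   # fact embeddings (would be learned)
question = rng.normal(size=dim)               # question embedding (would be learned)

attention = np.exp(memory @ question)
attention /= attention.sum()                  # softmax over memory cells
readout = attention @ memory                  # attention-weighted fact summary

candidates = {"honolulu": rng.normal(size=dim), "chicago": rng.normal(size=dim)}
scores = {name: float(vec @ readout) for name, vec in candidates.items()}
print(scores)  # with random vectors the ranking is arbitrary; training fixes that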


North American Chapter of the Association for Computational Linguistics | 2016

Assessing Relative Sentence Complexity using an Incremental CCG Parser

Bharat Ram Ambati; Siva Reddy; Mark Steedman

Given a pair of sentences, we present computational models to assess if one sentence is simpler to read than the other. While existing models explored the usage of phrase structure features using a non-incremental parser, experimental evidence suggests that the human language processor works incrementally. We empirically evaluate if syntactic features from an incremental CCG parser are more useful than features from a non-incremental phrase structure parser. Our evaluation on Simple and Standard Wikipedia sentence pairs suggests that incremental CCG features are indeed more useful than phrase structure features achieving 0.44 points gain in performance. Incremental CCG parser also gives significant improvements in speed (12 times faster) in comparison to the phrase structure parser. Furthermore, with the addition of psycholinguistic features, we achieve the strongest result to date reported on this task. Our code and data can be downloaded from https://github.com/bharatambati/sent-compl.
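The pairwise setup can be sketched with a linear scorer. The surrogate features and weights below are hypothetical and stand in for the incremental CCG parser features and the learned classifier evaluated in the paper.

# Minimal sketch with hand-picked surrogate complexity features.
def features(sentence):
    tokens = sentence.split()
    return [
        len(tokens),                                # sentence length
        sum(len(t) for t in tokens) / len(tokens),  # mean token length
        sentence.count(","),                        # rough clause count
    ]

WEIGHTS = [0.5, 1.0, 2.0]  # toy weights; the paper trains a classifier instead

def complexity(sentence):
    return sum(w * f for w, f in zip(WEIGHTS, features(sentence)))

def simpler(a, b):
    """Return whichever of the two sentences scores as easier to read."""
    return a if complexity(a) <= complexity(b) else b

print(simpler("The cat sat on the mat.",
              "Perched upon the mat, the cat, evidently content, reposed."))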


Empirical Methods in Natural Language Processing | 2016

Evaluating Induced CCG Parsers on Grounded Semantic Parsing

Yonatan Bisk; Siva Reddy; John Blitzer; Julia Hockenmaier; Mark Steedman

We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis. This extrinsic, task-based evaluation provides a unique window to explore the strengths and weaknesses of semantics captured by unsupervised grammar induction systems. We release a new Freebase semantic parsing dataset called SPADES (Semantic PArsing of DEclarative Sentences) containing 93K cloze-style questions paired with answers. We evaluate all our models on this dataset. Our code and data are available at this https URL.
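A SPADES-style evaluation loop over cloze questions can be sketched as follows. The two examples and the keyword-based predictor are toy stand-ins for the released dataset and for the induced CCG parsing models being compared.

# Minimal sketch: accuracy on fill-in-the-blank (cloze) questions.
examples = [
    {"question": "USA elected _blank_ as president in 2008.",
     "answer": "barack_obama"},
    {"question": "_blank_ is the capital of Scotland.",
     "answer": "edinburgh"},
]

def predict(question):
    """Stand-in for a semantic parsing / QA model."""
    guesses = {"president": "barack_obama", "capital": "edinburgh"}
    for keyword, entity in guesses.items():
        if keyword in question:
            return entity
    return "unknown"

correct = sum(predict(ex["question"]) == ex["answer"] for ex in examples)
print(f"accuracy: {correct / len(examples):.2f}")  # 1.00 on this toy set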


Transactions of the Association for Computational Linguistics | 2014

Large-scale Semantic Parsing without Question-Answer Pairs

Siva Reddy; Mirella Lapata; Mark Steedman



Transactions of the Association for Computational Linguistics | 2016

Transforming Dependency Structures to Logical Forms for Semantic Parsing

Siva Reddy; Oscar Täckström; Michael Collins; Tom Kwiatkowski; Dipanjan Das; Mark Steedman; Mirella Lapata



Empirical Methods in Natural Language Processing | 2017

Learning to Paraphrase for Question Answering

Li Dong; Jonathan Mallinson; Siva Reddy; Mirella Lapata


Language Resources and Evaluation | 2012

Word Sketches for Turkish

Bharat Ram Ambati; Siva Reddy; Adam Kilgarriff

Collaboration


Siva Reddy's top co-authors and their affiliations.

Top Co-Authors

Abhilash Inumella
International Institute of Information Technology

Marcin Junczys-Dowmunt
Adam Mickiewicz University in Poznań