Publication


Featured research published by Matt Gardner.


Empirical Methods in Natural Language Processing | 2014

Incorporating Vector Space Similarity in Random Walk Inference over Knowledge Bases

Matt Gardner; Partha Pratim Talukdar; Jayant Krishnamurthy; Tom M. Mitchell

Much work in recent years has gone into the construction of large knowledge bases (KBs), such as Freebase, DBpedia, NELL, and YAGO. While these KBs are very large, they are still very incomplete, necessitating the use of inference to fill in gaps. Prior work has shown how to make use of a large text corpus to augment random walk inference over KBs. We present two improvements to the use of such large corpora to augment KB inference. First, we present a new technique for combining KB relations and surface text into a single graph representation that is much more compact than graphs used in prior work. Second, we describe how to incorporate vector space similarity into random walk inference over KBs, reducing the feature sparsity inherent in using surface text. This allows us to combine distributional similarity with symbolic logical inference in novel and effective ways. With experiments on many relations from two separate KBs, we show that our methods significantly outperform prior work on KB inference, both in the size of problems our methods can handle and in the quality of predictions made.
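
A minimal sketch of the paper's central idea, replacing exact matching of edge labels during random walk inference with cosine similarity between relation embeddings, might look like the following. The relation names, vectors, and 0.75 threshold are invented for illustration; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): soft matching of edge labels
# during a random walk, using cosine similarity between relation embeddings
# instead of exact string equality. All vectors here are made up.
import numpy as np

# Toy relation embeddings, e.g. learned from a large text corpus.
relation_vectors = {
    "plays_for":     np.array([0.9, 0.1, 0.0]),
    "is_athlete_of": np.array([0.8, 0.2, 0.1]),
    "born_in":       np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def edge_matches(edge_relation, path_relation, threshold=0.75):
    """Exact-match inference requires edge_relation == path_relation;
    the vector-space relaxation also accepts near-synonymous relations,
    which reduces the sparsity of surface-text path features."""
    if edge_relation == path_relation:
        return True
    u = relation_vectors.get(edge_relation)
    v = relation_vectors.get(path_relation)
    return u is not None and v is not None and cosine(u, v) >= threshold

print(edge_matches("is_athlete_of", "plays_for"))  # True: similar vectors
print(edge_matches("born_in", "plays_for"))        # False: dissimilar
```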


Empirical Methods in Natural Language Processing | 2015

Efficient and Expressive Knowledge Base Completion Using Subgraph Feature Extraction

Matt Gardner; Tom M. Mitchell

We explore some of the practicalities of using random walk inference methods, such as the Path Ranking Algorithm (PRA), for the task of knowledge base completion. We show that the random walk probabilities computed (at great expense) by PRA provide no discernible benefit to performance on this task, so they can safely be dropped. This allows us to define a simpler algorithm for generating feature matrices from graphs, which we call subgraph feature extraction (SFE). In addition to being conceptually simpler than PRA, SFE is much more efficient, reducing computation by an order of magnitude, and more expressive, allowing for much richer features than paths between two nodes in a graph. We show experimentally that this technique gives substantially better performance than PRA and its variants, improving mean average precision from .432 to .528 on a knowledge base completion task using the NELL KB.
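
The simpler feature generation described in the abstract can be sketched roughly as follows: enumerate short relation paths connecting an entity pair by plain graph search and treat each path type as a binary feature, computing no random walk probabilities at all. The toy graph, relation names, and two-hop limit are invented for this example; the actual algorithm searches from both endpoints and admits richer subgraph features than paths.

```python
# Illustrative sketch of subgraph feature extraction: breadth-first
# enumeration of relation paths, used as binary features. Toy data only.
from collections import deque

# Toy KB graph: node -> list of (relation, neighbor) edges.
graph = {
    "pittsburgh": [("city_in_state", "pennsylvania")],
    "pennsylvania": [("state_in_country", "usa")],
    "usa": [],
}

def relation_paths(start, max_hops=2):
    """All (relation sequence, end node) pairs reachable within max_hops."""
    paths = []
    queue = deque([(start, ())])
    while queue:
        node, rels = queue.popleft()
        if rels:
            paths.append((rels, node))
        if len(rels) < max_hops:
            for relation, neighbor in graph.get(node, []):
                queue.append((neighbor, rels + (relation,)))
    return paths

def sfe_features(source, target, max_hops=2):
    """Binary path features: relation sequences linking source to target."""
    return {rels for rels, end in relation_paths(source, max_hops) if end == target}

print(sfe_features("pittsburgh", "usa"))
# {('city_in_state', 'state_in_country')}
```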


Empirical Methods in Natural Language Processing | 2015

Translation Invariant Word Embeddings

Kejun Huang; Matt Gardner; Evangelos E. Papalexakis; Christos Faloutsos; Nikos D. Sidiropoulos; Tom M. Mitchell; Partha Pratim Talukdar; Xiao Fu

This work focuses on the task of finding latent vector representations of the words in a corpus. In particular, we address the issue of what to do when there are multiple languages in the corpus. Prior work has, among other techniques, used canonical correlation analysis to project pre-trained vectors in two languages into a common space. We propose a simple and scalable method that is inspired by the notion that the learned vector representations should be invariant to translation between languages. We show empirically that our method outperforms prior work on multilingual tasks, matches the performance of prior work on monolingual tasks, and scales linearly with the size of the input data (and thus the number of languages being embedded).
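
The translation-invariance intuition can be illustrated with a much simpler baseline than the paper's method: fit a linear map that sends source-language vectors onto the vectors of their translations, so that translated word pairs land near each other in a shared space. The toy embeddings, bilingual dictionary, and least-squares fit below are invented for illustration and are not the authors' algorithm.

```python
# Illustrative sketch of the translation-invariance idea via a least-squares
# linear map between two toy embedding spaces (NOT the paper's method).
import numpy as np

# Invented pre-trained embeddings for three English and three German words.
en = {"dog": [1.0, 0.1], "cat": [0.9, 0.3], "house": [0.1, 1.0]}
de = {"hund": [0.2, 1.0], "katze": [0.3, 0.9], "haus": [1.0, 0.1]}
dictionary = [("dog", "hund"), ("cat", "katze"), ("house", "haus")]

X = np.array([en[e] for e, _ in dictionary])
Y = np.array([de[d] for _, d in dictionary])

# Solve X @ W ~ Y in the least-squares sense, so mapping an English vector
# with W should land it near the vector of its German translation.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

mapped_dog = np.array(en["dog"]) @ W
dists = {d: float(np.linalg.norm(mapped_dog - np.array(v)))
         for d, v in de.items()}
print(min(dists, key=dists.get))  # expected: "hund"
```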


National Conference on Artificial Intelligence | 2015

Never-Ending Learning

Tom M. Mitchell; William W. Cohen; E. Hruschka; Partha Pratim Talukdar; Justin Betteridge; Andrew Carlson; Bhavana Dalvi; Matt Gardner; Bryan Kisiel; Jayant Krishnamurthy; Ni Lao; Kathryn Mazaitis; T. Mohamed; Ndapandula Nakashole; Emmanouil Antonios Platanios; Alan Ritter; Mehdi Samadi; Burr Settles; Richard C. Wang; Derry Tanti Wijaya; Abhinav Gupta; Xi Chen; A. Saparov; M. Greaves; J. Welling


Empirical Methods in Natural Language Processing | 2013

Improving Learning and Inference in a Large Knowledge-Base using Latent Syntactic Cues

Matt Gardner; Partha Pratim Talukdar; Bryan Kisiel; Tom M. Mitchell


Meeting of the Association for Computational Linguistics | 2018

Simple and Effective Multi-Paragraph Reading Comprehension

Christopher Clark; Matt Gardner


Empirical Methods in Natural Language Processing | 2017

Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

Jayant Krishnamurthy; Pradeep Dasigi; Matt Gardner


arXiv: Computation and Language | 2018

AllenNLP: A Deep Semantic Natural Language Processing Platform

Matt Gardner; Joel Grus; Mark Neumann; Oyvind Tafjord; Pradeep Dasigi; Nelson F. Liu; Matthew E. Peters; Michael Schmitz; Luke Zettlemoyer


Empirical Methods in Natural Language Processing | 2017

Crowdsourcing Multiple Choice Science Questions

Johannes Welbl; Nelson F. Liu; Matt Gardner


Text Analysis Conference (TAC) | 2013

CMUML System for KBP 2013 Slot Filling

Bryan Kisiel; Justin Betteridge; Matt Gardner; Jayant Krishnamurthy; Ndapa Nakashole; Mehdi Samadi; Partha Pratim Talukdar; Derry Tanti Wijaya; Tom M. Mitchell

Collaboration


Dive into Matt Gardner's collaborations.

Top Co-Authors

Tom M. Mitchell | Carnegie Mellon University
Bryan Kisiel | Carnegie Mellon University
Justin Betteridge | Carnegie Mellon University
Mehdi Samadi | Carnegie Mellon University