
Publication


Featured research published by Jesse Dodge.


Empirical Methods in Natural Language Processing (EMNLP) | 2016

Key-Value Memory Networks for Directly Reading Documents

Alexander H. Miller; Adam Fisch; Jesse Dodge; Amir-Hossein Karimi; Antoine Bordes; Jason Weston

Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately, KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction, or Wikipedia documents directly in a single framework, we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.
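The core idea the abstract describes, using one encoding of a memory for addressing and a different encoding for the returned output, can be sketched in a few lines. This is a minimal illustration with toy random embeddings, not the paper's trained model; in the actual Key-Value Memory Network, the key and value encodings are learned end-to-end and the read is repeated over several "hops".

```python
# Minimal sketch of one key-value memory read (illustrative only):
# keys are used in the addressing stage, values in the output stage.
import numpy as np

rng = np.random.default_rng(0)
d, n_slots = 8, 5

keys = rng.normal(size=(n_slots, d))    # addressing encoding of each memory
values = rng.normal(size=(n_slots, d))  # output encoding of the same memories
query = rng.normal(size=d)

def kv_memory_read(query, keys, values):
    """Softmax-attend over keys, return the weighted sum of values."""
    scores = keys @ query                   # relevance of each memory slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over slots
    return weights @ values                 # read vector lives in value space

out = kv_memory_read(query, keys, values)
assert out.shape == (d,)
```

Separating the two encodings is what makes raw documents usable as memory: a slot can be addressed by, say, a sentence window while returning only the candidate answer word.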


Computer Vision and Pattern Recognition (CVPR) | 2012

Understanding and predicting importance in images

Alexander C. Berg; Tamara L. Berg; Hal Daumé; Jesse Dodge; Amit Goyal; Xufeng Han; Alyssa Mensch; Margaret Mitchell; Aneesh Sood; Karl Stratos; Kota Yamaguchi

What do people care about in an image? To drive computational visual recognition toward more human-centric outputs, we need a better understanding of how people perceive and judge the importance of content in images. In this paper, we explore how a number of factors relate to human perception of importance. Proposed factors fall into 3 broad types: 1) factors related to composition, e.g. size, location, 2) factors related to semantics, e.g. category of object or scene, and 3) contextual factors related to the likelihood of attribute-object, or object-scene pairs. We explore these factors using what people describe as a proxy for importance. Finally, we build models to predict what will be described about an image given either known image content, or image content estimated automatically by recognition systems.


Meeting of the Association for Computational Linguistics (ACL) | 2014

Context-dependent Semantic Parsing for Time Expressions

Kenton Lee; Yoav Artzi; Jesse Dodge; Luke Zettlemoyer

We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments on benchmark datasets show that our approach outperforms previous state-of-the-art systems, with error reductions of 13% to 21% in end-to-end performance.


International Journal of Computer Vision | 2016

Large Scale Retrieval and Generation of Image Descriptions

Vicente Ordonez; Xufeng Han; Polina Kuznetsova; Girish Kulkarni; Margaret Mitchell; Kota Yamaguchi; Karl Stratos; Amit Goyal; Jesse Dodge; Alyssa Mensch; Hal Daumé; Alexander C. Berg; Yejin Choi; Tamara L. Berg

What is the story of an image? What is the relationship between pictures, language, and information we can extract using state of the art computational recognition systems? In an attempt to address both of these questions, we explore methods for retrieving and generating natural language descriptions for images. Ideally, we would like our generated textual descriptions (captions) to both sound like a person wrote them, and also remain true to the image content. To do this we develop data-driven approaches for image description generation, using retrieval-based techniques to gather either: (a) whole captions associated with a visually similar image, or (b) relevant bits of text (phrases) from a large collection of image + description pairs. In the case of (b), we develop optimization algorithms to merge the retrieved phrases into valid natural language sentences. The end result is two simple, but effective, methods for harnessing the power of big data to produce image captions that are altogether more general, relevant, and human-like than previous attempts.


International Conference on Computational Linguistics (COLING) | 2014

CMU: Arc-Factored, Discriminative Semantic Dependency Parsing

Sam Thomson; Brendan O'Connor; Jeffrey Flanigan; David Bamman; Jesse Dodge; Swabha Swayamdipta; Nathan Schneider; Chris Dyer; Noah A. Smith

We present an arc-factored statistical model for semantic dependency parsing, as defined by the SemEval 2014 Shared Task 8 on Broad-Coverage Semantic Dependency Parsing. Our entry in the open track placed second in the competition.


Conference of the European Chapter of the Association for Computational Linguistics (EACL) | 2012

Midge: Generating Image Descriptions From Computer Vision Detections

Margaret Mitchell; Jesse Dodge; Amit Goyal; Kota Yamaguchi; Karl Stratos; Xufeng Han; Alyssa Mensch; Alexander C. Berg; Tamara L. Berg; Hal Daumé


North American Chapter of the Association for Computational Linguistics (NAACL) | 2015

Retrofitting Word Vectors to Semantic Lexicons

Manaal Faruqui; Jesse Dodge; Sujay Kumar Jauhar; Chris Dyer; Eduard H. Hovy; Noah A. Smith


International Conference on Learning Representations (ICLR) | 2016

Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems

Jesse Dodge; Andreea Gane; Xiang Zhang; Antoine Bordes; Sumit Chopra; Alexander H. Miller; Arthur Szlam; Jason Weston


arXiv: Machine Learning | 2018

Open Loop Hyperparameter Optimization and Determinantal Point Processes

Jesse Dodge; Kevin G. Jamieson; Noah A. Smith


arXiv: Machine Learning | 2017

Random Search for Hyperparameters using Determinantal Point Processes

Jesse Dodge; Catrìona Anderson; Noah A. Smith

Collaboration


Dive into Jesse Dodge's collaborations.

Top Co-Authors

Noah A. Smith (University of Washington)

Alexander C. Berg (University of North Carolina at Chapel Hill)

Alyssa Mensch (Massachusetts Institute of Technology)

Tamara L. Berg (University of North Carolina at Chapel Hill)

Xufeng Han (Stony Brook University)