Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James W. Murdock is active.

Publication


Featured research published by James W. Murdock.


IBM Journal of Research and Development | 2012

Deep parsing in Watson

Michael C. McCord; James W. Murdock; Branimir Boguraev

Two deep parsing components, an English Slot Grammar (ESG) parser and a predicate-argument structure (PAS) builder, provide core linguistic analyses of both the questions and the text content used by IBM Watson™ to find and hypothesize answers. Specifically, these components are fundamental in question analysis, candidate generation, and analysis of passage evidence. As part of the Watson project, ESG was enhanced, and its performance on Jeopardy!™ questions and on established reference data was improved. PAS was built on top of ESG to support higher-level analytics. In this paper, we describe these components and illustrate how they are used in a pattern-based relation extraction component of Watson. We also provide quantitative results of evaluating the component-level performance of ESG parsing.
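
As an informal illustration of the kind of structure a PAS builder produces on top of a parse, the sketch below models a single predication as a small data structure. All class, field, and example names are invented for this sketch and do not reflect ESG's or Watson's actual code.

```python
# Illustrative sketch only: a toy predicate-argument structure (PAS) of the
# kind built on top of a deep parse. Names are hypothetical, not Watson's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    word: str
    lemma: str
    pos: str            # part-of-speech tag

@dataclass
class Predication:
    predicate: Node                      # e.g. the verb "composed"
    arguments: List[Node] = field(default_factory=list)

# "Beethoven composed nine symphonies" as a single predication
beethoven  = Node("Beethoven", "beethoven", "NNP")
composed   = Node("composed", "compose", "VBD")
symphonies = Node("symphonies", "symphony", "NNS")

pas = Predication(predicate=composed, arguments=[beethoven, symphonies])

# Downstream analytics (e.g. pattern-based relation extraction) can match on
# lemmas rather than surface forms, which is one benefit of a PAS layer.
print(pas.predicate.lemma, [a.lemma for a in pas.arguments])
```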


IBM Journal of Research and Development | 2012

A framework for merging and ranking of answers in DeepQA

David Gondek; Adam Lally; Aditya Kalyanpur; James W. Murdock; P. A. Duboue; Lixin Zhang; Yue Pan; Z. M. Qiu; Chris Welty

The final stage in the IBM DeepQA pipeline involves ranking all candidate answers according to their evidence scores and judging the likelihood that each candidate answer is correct. In DeepQA, this is done using a machine learning framework that is phase-based, providing capabilities for manipulating the data and applying machine learning in successive applications. We show how this design can be used to implement solutions to particular challenges that arise in applying machine learning for evidence-based hypothesis evaluation. Our approach facilitates an agile development environment for DeepQA; evidence scoring strategies can be easily introduced, revised, and reconfigured without the need for error-prone manual effort to determine how to combine the various evidence scores. We describe the framework, explain the challenges, and evaluate the gain over a baseline machine learning approach.
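
As a loose analogy to the idea of learning how to combine evidence scores rather than hand-tuning them, the sketch below trains a simple logistic model over per-candidate feature vectors and ranks candidates by predicted probability of correctness. The features, data, and single-phase setup are invented for illustration and are far simpler than DeepQA's phase-based framework.

```python
# Minimal sketch, not DeepQA's actual framework: rank candidate answers by
# learning to combine per-candidate evidence scores with a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one candidate answer; columns are evidence scores
# (e.g. passage score, type score, popularity score). Data is invented.
X_train = np.array([
    [0.9, 0.8, 0.7],   # correct candidate
    [0.2, 0.1, 0.6],   # incorrect candidate
    [0.7, 0.9, 0.4],   # correct candidate
    [0.3, 0.2, 0.2],   # incorrect candidate
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# At answer time, score new candidates and rank by estimated probability of
# correctness; the top-ranked probability can double as a confidence estimate.
candidates = {"Toronto": [0.4, 0.3, 0.9], "Chicago": [0.8, 0.9, 0.6]}
scored = {name: model.predict_proba([feats])[0, 1] for name, feats in candidates.items()}
for name, p in sorted(scored.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.2f}")
```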


IBM Journal of Research and Development | 2012

Structured data and inference in DeepQA

Aditya Kalyanpur; Branimir Boguraev; Siddharth Patwardhan; James W. Murdock; Adam Lally; Chris Welty; John M. Prager; B. Coppola; Achille B. Fokoue-Nkoutche; Lixin Zhang; Yue Pan; Z. M. Qiu

Although the majority of evidence analysis in DeepQA is focused on unstructured information (e.g., natural-language documents), several components in the DeepQA system use structured data (e.g., databases, knowledge bases, and ontologies) to generate potential candidate answers or find additional evidence. Structured data analytics are a natural complement to unstructured methods in that they typically cover a narrower range of questions but are more precise within that range. Moreover, structured data that has formal semantics is amenable to logical reasoning techniques that can be used to provide implicit evidence. The DeepQA system does not contain a single monolithic structured data module; instead, it allows for different components to use and integrate structured and semistructured data, with varying degrees of expressivity and formal specificity. This paper is a survey of DeepQA components that use structured data. Areas in which evidence from structured sources has the most impact include typing of answers, application of geospatial and temporal constraints, and the use of formally encoded a priori knowledge of commonly appearing entity types such as countries and U.S. presidents. We present details of appropriate components and demonstrate their end-to-end impact on the IBM Watson™ system.
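
As a hedged, toy-scale illustration of using structured facts as evidence, the sketch below checks candidates against a hand-built table to apply a temporal constraint. The knowledge base, constraint, and function names are all hypothetical and stand in for the much richer structured sources surveyed in the paper.

```python
# Illustration only: a tiny, hand-built structured source supplying evidence
# for a temporal constraint. The facts and constraint below are invented.

# Structured facts: entity -> (birth_year, death_year)
KB = {
    "Abraham Lincoln": (1809, 1865),
    "Theodore Roosevelt": (1858, 1919),
    "Franklin D. Roosevelt": (1882, 1945),
}

def satisfies_temporal_constraint(candidate: str, year: int) -> bool:
    """True if the candidate was alive in the given year, False if provably
    not, and True (no penalty) when the structured source has no evidence."""
    span = KB.get(candidate)
    if span is None:
        return True          # no structured evidence either way
    birth, death = span
    return birth <= year <= death

# Question constraint: "this president ... in 1905"
for c in ["Abraham Lincoln", "Theodore Roosevelt", "Winston Churchill"]:
    print(c, satisfies_temporal_constraint(c, 1905))
```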


IBM Journal of Research and Development | 2012

Typing candidate answers using type coercion

James W. Murdock; Aditya Kalyanpur; Chris Welty; James Fan; David A. Ferrucci; David Gondek; Lixin Zhang; H. Kanayama

Many questions explicitly indicate the type of answer required. One popular approach to answering those questions is to develop recognizers to identify instances of common answer types (e.g., countries, animals, and food) and consider only answers on those lists. Such a strategy is poorly suited to answering questions from the Jeopardy!™ television quiz show. Jeopardy! questions have an extremely broad range of types of answers, and the most frequently occurring types cover only a small fraction of all answers. We present an alternative approach to dealing with answer types. We generate candidate answers without regard to type, and for each candidate, we employ a variety of sources and strategies to judge whether the candidate has the desired type. These sources and strategies provide a set of type coercion scores for each candidate answer. We use these scores to give preference to answers with more evidence of having the right type. Our question-answering system is significantly more accurate with type coercion than it is without type coercion; these components have a combined impact of nearly 5% on the accuracy of the IBM Watson™ question-answering system.
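
The sketch below illustrates the general shape of this idea: several independent sources each produce a type score for a candidate, and the scores are combined so that candidates with more evidence of having the right type are preferred. The sources, weights, and data are invented for illustration and are not Watson's actual type-coercion components.

```python
# Hypothetical type-coercion scoring: each source judges whether a candidate
# matches the requested answer type; scores are combined into one preference.

def name_list_score(candidate: str, answer_type: str) -> float:
    """Score from a closed list of known instances of the type."""
    lists = {"country": {"France", "Japan", "Chile"}}
    return 1.0 if candidate in lists.get(answer_type, set()) else 0.0

def suffix_heuristic_score(candidate: str, answer_type: str) -> float:
    """A weak heuristic source, e.g. many country names end in 'ia' or 'land'."""
    if answer_type == "country" and candidate.endswith(("ia", "land")):
        return 0.6
    return 0.0

SOURCES = [(name_list_score, 0.7), (suffix_heuristic_score, 0.3)]

def type_coercion_score(candidate: str, answer_type: str) -> float:
    # Weighted combination of per-source scores; in practice the combination
    # would itself be learned rather than hand-weighted.
    return sum(w * src(candidate, answer_type) for src, w in SOURCES)

for cand in ["France", "Finland", "Paris"]:
    print(cand, round(type_coercion_score(cand, "country"), 2))
```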


IBM Journal of Research and Development | 2012

Textual evidence gathering and analysis

James W. Murdock; James Fan; Adam Lally; Hideki Shima; Branimir Boguraev

One useful source of evidence for evaluating a candidate answer to a question is a passage that contains the candidate answer and is relevant to the question. In the DeepQA pipeline, we retrieve passages using a novel technique that we call Supporting Evidence Retrieval, in which we perform separate search queries for each candidate answer, in parallel, and include the candidate answer as part of the query. We then score these passages using an assortment of algorithms that use different aspects and relationships of the terms in the question and passage. We provide evidence that our mechanisms for obtaining and scoring passages have a substantial impact on the ability of our question-answering system to answer questions and judge the confidence of the answers.
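
A minimal sketch of the Supporting Evidence Retrieval idea follows, assuming a toy in-memory corpus and simple term-overlap scoring in place of Watson's retrieval engine and passage scorers: one query is issued per candidate, with the candidate added to the query, and the returned passages are then scored against the question.

```python
# Toy stand-ins for retrieval and passage scoring; not Watson's components.
import re

CORPUS = [
    "Ottawa is the capital city of Canada.",
    "Toronto is the largest city in Canada.",
    "Canberra was chosen as the capital of Australia in 1908.",
]

def tokenize(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query_terms: set, k: int = 2) -> list:
    """Toy retrieval: rank passages by how many query terms they contain."""
    return sorted(CORPUS, key=lambda p: -len(query_terms & tokenize(p)))[:k]

def passage_score(question: str, passage: str) -> float:
    """Toy passage scorer: fraction of question terms found in the passage."""
    q = tokenize(question)
    return len(q & tokenize(passage)) / len(q)

question = "What is the capital of Canada?"
for cand in ["Ottawa", "Toronto"]:
    # One search per candidate, with the candidate itself added to the query.
    query = tokenize(question) | tokenize(cand)
    passages = retrieve(query)
    # Score only passages that actually contain the candidate answer.
    scores = [passage_score(question, p) for p in passages if tokenize(cand) <= tokenize(p)]
    print(cand, round(max(scores, default=0.0), 2))
```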


Archive | 2010

Questions and answers generation

Pablo Ariel Duboue; David A. Ferrucci; David Gondek; James W. Murdock; Wlodek Zadrozny


Archive | 2011

Providing answers to questions using multiple models to score candidate answers

Eric W. Brown; David A. Ferrucci; James W. Murdock


Archive | 2011

Providing answers to questions using logical synthesis of candidate answers

Eric W. Brown; Jennifer Chu-Carroll; David A. Ferrucci; Adam Lally; James W. Murdock; John M. Prager


Archive | 2011

Providing answers to questions using hypothesis pruning

Jennifer Chu-Carroll; David A. Ferrucci; David Gondek; Adam Lally; James W. Murdock


Archive | 2012

Utilizing failures in question and answer system responses to enhance the accuracy of question and answer systems

Michael A. Barborak; Jennifer Chu-Carroll; David A. Ferrucci; James W. Murdock; Wlodek Zadrozny

Collaboration


Dive into James W. Murdock's collaborations.

Top Co-Authors


James Fan

University of Texas at Austin


Wlodek Zadrozny

University of North Carolina at Charlotte
