Publications


Featured research published by Denis Savenkov.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2016

When a Knowledge Base Is Not Enough: Question Answering over Knowledge Bases with External Text Data

Denis Savenkov; Eugene Agichtein

One of the major challenges for automated question answering over Knowledge Bases (KBQA) is translating a natural language question into Knowledge Base (KB) entities and predicates. Previous systems have used a limited amount of training data to learn a lexicon that is later used for question answering. This approach does not make use of other potentially relevant text data outside the KB, which could supplement the available information. We introduce a new system, Text2KB, that enriches question answering over a knowledge base by using external text data. Specifically, we revisit different phases in the KBQA process and demonstrate that text resources improve question interpretation, candidate generation, and ranking. Building on a state-of-the-art traditional KBQA system, Text2KB utilizes web search results, community question answering data, and a general text document collection to detect question topic entities, map question phrases to KB predicates, and enrich the features of the candidates derived from the KB. Text2KB significantly improves performance over the baseline KBQA method, as measured on the popular WebQuestions dataset. The results and insights developed in this work can guide future efforts on combining textual and structured KB data for question answering.
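
The central idea above, augmenting KB-derived answer candidates with features mined from external text before ranking, can be illustrated with a short sketch. The class and function names below are hypothetical, not from the paper; the snippet only shows the flavor of merging KB and text evidence into one feature vector.

```python
# Hypothetical sketch: KB-derived candidate features are augmented with
# signals mined from external text (here, web search snippets) before ranking.
from dataclasses import dataclass, field


@dataclass
class AnswerCandidate:
    entity_name: str                              # e.g. "Barack Obama"
    kb_features: dict = field(default_factory=dict)
    text_features: dict = field(default_factory=dict)

    def feature_vector(self, feature_names):
        # Merge KB and text features into one vector for the ranker.
        merged = {**self.kb_features, **self.text_features}
        return [merged.get(name, 0.0) for name in feature_names]


def enrich_with_text(candidate, question, web_snippets):
    """Add simple text-based evidence features to a KB-derived candidate.

    `web_snippets` stands in for the web search results the paper mentions;
    the real system also draws on CQA archives and a document collection.
    """
    mentions = sum(candidate.entity_name.lower() in s.lower() for s in web_snippets)
    overlap = len(set(question.lower().split())
                  & set(" ".join(web_snippets).lower().split()))
    candidate.text_features["snippet_mentions"] = float(mentions)
    candidate.text_features["question_snippet_overlap"] = float(overlap)
    return candidate
```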


North American Chapter of the Association for Computational Linguistics | 2015

Relation Extraction from Community Generated Question-Answer Pairs

Denis Savenkov; Wei-Lwun Lu; Jeffrey Dalton; Eugene Agichtein

Community question answering (CQA) websites contain millions of question and answer (QnA) pairs that represent real users' interests. Traditional methods for relation extraction from natural language text operate over individual sentences. However, answer text is sometimes hard to understand without knowing the question, e.g., it may not name the subject or relation of the question. This work presents a novel model for relation extraction from CQA data, which uses the discourse of QnA pairs to predict relations between entities mentioned in question and answer sentences. Experiments on two publicly available datasets demonstrate that the model can extract 20% to 40% additional relation triples that are not extracted by existing sentence-based models.
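
A minimal sketch of the extraction setting described above: because the answer may not restate the subject, candidate triples are formed by pairing entities mentioned in the question with entities mentioned in the answer, and a learned scorer decides which relation, if any, holds. The function and the toy scorer below are illustrative assumptions, not the paper's model.

```python
# Illustrative only: pair question entities with answer entities so that the
# answer sentence no longer needs to restate the subject of the relation.
def candidate_triples(question_entities, answer_entities, relation_scorer):
    """Yield (subject, relation, object, score) tuples for one QnA pair.

    `relation_scorer` stands in for the learned model that uses the joint
    question-answer context to score possible relations.
    """
    for subj in question_entities:
        for obj in answer_entities:
            relation, score = relation_scorer(subj, obj)
            if relation is not None:
                yield (subj, relation, obj, score)


# Example usage with a toy scorer that only knows one relation.
def toy_scorer(subj, obj):
    return ("place_of_birth", 0.9) if subj == "Albert Einstein" else (None, 0.0)


triples = list(candidate_triples(["Albert Einstein"], ["Ulm"], toy_scorer))
# [('Albert Einstein', 'place_of_birth', 'Ulm', 0.9)]
```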


Conference on Human Information Interaction and Retrieval | 2017

What Do You Mean Exactly?: Analyzing Clarification Questions in CQA

Pavel Braslavski; Denis Savenkov; Eugene Agichtein; Alina Dubatovka

Search as a dialogue is an emerging paradigm, fueled by the proliferation of mobile devices and technological advances, e.g., in speech recognition and natural language processing. Such an interface allows search systems to engage in a dialogue with users aimed at fulfilling their information needs. One key capability required to make such search dialogues effective is proactively asking clarification questions (CLARQ) when a user's intent is not clear, which could help the system provide more useful responses. With this in mind, we explore the dialogues between users on a community question answering (CQA) website as a rich repository of information-seeking interactions. In particular, we study the clarification questions asked by CQA users in two different domains, analyzing their behavior and the types of clarification questions asked. Our results suggest that the types of CLARQ are very diverse, while the questions themselves tend to be specific and require both domain-specific and general knowledge. However, focusing on popular CLARQ types and domains can be fruitful. As a first step towards automatic generation of clarification questions, we explore the problem of predicting the specific subject of a clarification question. Our findings can be useful for future improvements of intelligent dialogue search and question answering systems.


Proceedings of the Workshop on Human-Computer Question Answering | 2016

Crowdsourcing for (almost) Real-time Question Answering

Denis Savenkov; Scott Weitzner; Eugene Agichtein

Modern search engines have made dramatic progress in answering many users' questions about facts, such as those that might be retrieved or directly inferred from a knowledge base. However, many other questions that real users ask are more complex, such as asking for opinions or advice for a particular situation, and are still largely beyond the competence of computer systems. As conversational agents become more popular, QA systems are increasingly expected to handle such complex questions, and to do so in (nearly) real time, as the searcher is unlikely to wait longer than a minute or two for an answer. One way to overcome some of the challenges in complex question answering is crowdsourcing. We explore two ways crowdsourcing can assist a question answering system that operates in (near) real time: by providing answer validation, which could be used to filter or re-rank the candidate answers, and by creating the answer candidates directly. Specifically, we focus on understanding the effects of time restrictions in the near real-time QA setting. Our experiments show that even within a one-minute time limit, crowd workers can produce reliable ratings for up to three answer candidates and generate answers that are better than those of an average automated system from the LiveQA 2015 shared task. Our findings can be useful for developing hybrid human-computer systems for automatic question answering and conversational agents.
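
The answer-validation path described above can be sketched as a simple re-ranking step: crowd ratings gathered within the time limit are averaged per candidate and used to reorder the automatic system's answers. The data layout and function below are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of crowd-based answer validation: average the ratings that
# arrived within the time limit and re-rank the rated candidates accordingly.
from statistics import mean


def rerank_by_crowd(candidates, ratings, max_rated=3):
    """Re-rank answer candidates using average crowd ratings.

    `candidates` is the automatic system's ordered list of answer strings;
    `ratings` maps an answer to the list of scores workers gave it. Only the
    top `max_rated` candidates are assumed to have been rated, mirroring the
    up-to-three-candidates finding reported in the paper.
    """
    def sort_key(item):
        index, answer = item
        crowd_scores = ratings.get(answer, [])
        # Higher average rating sorts first; fall back to the original rank
        # when no ratings arrived in time.
        return (-mean(crowd_scores) if crowd_scores else 0.0, index)

    rated, rest = candidates[:max_rated], candidates[max_rated:]
    reranked = [answer for _, answer in sorted(enumerate(rated), key=sort_key)]
    return reranked + rest
```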


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

To hint or not: exploring the effectiveness of search hints for complex informational tasks

Denis Savenkov; Eugene Agichtein

Extensive previous research has shown that searchers often require assistance with query formulation and refinement. Yet it is not clear what kind of assistance is most useful, and how effective it is both objectively (e.g., in terms of task success) and subjectively (e.g., in terms of searcher perception of the search difficulty). This work describes the results of a controlled user study comparing the effects of providing specific vs. generic search hints on search success and satisfaction. Our results indicate that specific search hints tend to effectively improve searcher success rates and reduce perceived effort, while generic ones can be detrimental to both search effectiveness and user satisfaction. The results of this study are an important step towards the design of future search systems that could effectively assist and guide the user in accomplishing complex search tasks.


Meeting of the Association for Computational Linguistics | 2017

EviNets: Neural Networks for Combining Evidence Signals for Factoid Question Answering

Denis Savenkov; Eugene Agichtein

A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets, a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embedding vector, scores its relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.
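
The aggregation idea can be sketched in a few lines: embed each piece of evidence, score its relevance to the question, and sum relevance-weighted support for each candidate entity it mentions. The code below is a toy numpy simplification under those assumptions, not the published architecture.

```python
# Toy simplification of evidence aggregation: relevance of each evidence item
# to the question is computed from dense embeddings, then summed per candidate.
import numpy as np


def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()


def score_candidates(question_emb, evidence_embs, evidence_to_candidates, n_candidates):
    """Return a support score per answer candidate.

    question_emb:           (d,) question embedding
    evidence_embs:          (n_evidence, d) one embedding per evidence item
    evidence_to_candidates: list of candidate-index lists, one per evidence item
    """
    # How relevant each evidence item is to the question.
    relevance = softmax(evidence_embs @ question_emb)
    support = np.zeros(n_candidates)
    for ev_idx, cand_indices in enumerate(evidence_to_candidates):
        for c in cand_indices:
            # Aggregate relevance-weighted support for each candidate.
            support[c] += relevance[ev_idx]
    return support
```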


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2013

Search engine switching detection based on user personal preferences and behavior patterns

Denis Savenkov; Dmitry Lagun; Qiaoling Liu


Lecture Notes in Computer Science | 2011

Search Snippet Evaluation at Yandex: Lessons Learned and Future Directions

Denis Savenkov; Pavel Braslavski; Mikhail V. Lebedev


International Conference on Weblogs and Social Media | 2013

Touch Screens for Touchy Issues: Analysis of Accessing Sensitive Information from Mobile Devices

Dan Pelleg; Denis Savenkov; Eugene Agichtein


Archive | 2012

Improving Relevance Prediction by Addressing Biases and Sparsity in Web Search Click Data

Qi Guo; Dmitry Lagun; Denis Savenkov; Qiaoling Liu
