

Publication


Featured research published by Abdalghani Abujabal.


International World Wide Web Conference (WWW) | 2017

Automated Template Generation for Question Answering over Knowledge Graphs

Abdalghani Abujabal; Mohamed Yahya; Mirek Riedewald; Gerhard Weikum

Templates are an important asset for question answering over knowledge graphs, simplifying the semantic parsing of input utterances and generating structured queries for interpretable answers. State-of-the-art methods rely on hand-crafted templates with limited coverage. This paper presents QUINT, a system that automatically learns utterance-query templates solely from user questions paired with their answers. Additionally, QUINT is able to harness language compositionality for answering complex questions without having any templates for the entire question. Experiments with different benchmarks demonstrate the high quality of QUINT.
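As a toy illustration of the template idea described in this abstract, the sketch below pairs utterance patterns with query templates and instantiates a structured query from a matched question. All patterns, predicates, and names here are hypothetical stand-ins, not QUINT's actual learned representation (which is induced automatically from question-answer pairs).

```python
import re

# Hypothetical utterance-template / query-template pairs. QUINT learns
# such pairs automatically; these are hand-written only for illustration.
TEMPLATES = [
    (re.compile(r"who directed (?P<ent>.+)\?", re.IGNORECASE),
     "SELECT ?x WHERE {{ <{ent}> <directedBy> ?x }}"),
    (re.compile(r"where was (?P<ent>.+) born\?", re.IGNORECASE),
     "SELECT ?x WHERE {{ <{ent}> <bornIn> ?x }}"),
]

def question_to_query(question: str):
    """Match the question against known templates and build a query."""
    for pattern, query_template in TEMPLATES:
        m = pattern.match(question.strip())
        if m:
            entity = m.group("ent").replace(" ", "_")
            return query_template.format(ent=entity)
    return None  # no template covers this question
```

A question that matches no template returns `None`, which is exactly the coverage gap that hand-crafted template sets suffer from and that automatic template learning aims to close.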


Empirical Methods in Natural Language Processing (EMNLP) | 2015

FINET: Context-Aware Fine-Grained Named Entity Typing

Luciano Del Corro; Abdalghani Abujabal; Rainer Gemulla; Gerhard Weikum

We propose FINET, a system for detecting the types of named entities in short inputs—such as sentences or tweets—with respect to WordNet’s super fine-grained type system. FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation. FINET combats data scarcity and noise from existing systems: It does not rely on supervision in its extractors and generates training data for type selection from WordNet and other resources. FINET supports the most fine-grained type system so far, including types with no annotated training data. Our experiments indicate that FINET outperforms state-of-the-art methods in terms of recall, precision, and granularity of extracted types.
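The two-phase design sketched in the abstract — candidate generation followed by context-aware selection — can be caricatured as follows. The type lexicon and overlap scoring below are hypothetical toy stand-ins; the real system draws candidates from a cascade of extractors over WordNet's type system and selects with word-sense-disambiguation ideas.

```python
# Toy type lexicon: type -> words that typically co-occur with it
# (hypothetical; FINET derives such signals from WordNet and other resources).
TYPE_CONTEXT = {
    "musician":   {"album", "song", "band", "guitar"},
    "politician": {"election", "senate", "vote", "party"},
    "athlete":    {"match", "goal", "season", "team"},
}

def candidate_types(mention: str):
    """Candidate generation, collapsed to one toy lookup: every known
    type is a candidate for an otherwise unknown person mention."""
    return set(TYPE_CONTEXT)

def select_type(sentence: str, mention: str) -> str:
    """Selection: pick the candidate whose context words overlap the
    input sentence most (a crude stand-in for WSD-style selection)."""
    words = set(sentence.lower().split())
    return max(candidate_types(mention),
               key=lambda t: len(TYPE_CONTEXT[t] & words))
```

Note that neither phase needs labeled training data here, mirroring the abstract's point that the extractors are unsupervised.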


International World Wide Web Conference (WWW) | 2015

Important Events in the Past, Present, and Future

Abdalghani Abujabal; Klaus Berberich

We address the problem of identifying important events in the past, present, and future from semantically-annotated large-scale document collections. Semantic annotations that we consider are named entities (e.g., persons, locations, organizations) and temporal expressions (e.g., during the 1990s). More specifically, for a given time period of interest, our objective is to identify, rank, and describe important events that happened. Our approach P2F Miner makes use of frequent itemset mining to identify events and group sentences related to them. It uses an information-theoretic measure to rank identified events. For each of them, it selects a representative sentence as a description. Experiments on ClueWeb09 using events listed in Wikipedia year articles as ground truth show that our approach is effective and outperforms a baseline based on statistical language models.
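The frequent-itemset step described above can be sketched in miniature: each sentence is reduced to its set of semantic annotations (entities and temporal expressions), and annotation combinations that recur across many sentences become candidate events. The counting below is a minimal Apriori-style pass, not P2F Miner's actual implementation, and the example sentences are invented.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(annotated_sentences, min_support=2, size=2):
    """Count annotation pairs co-occurring across sentences and keep
    those meeting the support threshold (toy frequent-itemset mining)."""
    counts = Counter()
    for annotations in annotated_sentences:
        for itemset in combinations(sorted(annotations), size):
            counts[itemset] += 1
    return {iset: c for iset, c in counts.items() if c >= min_support}

# Hypothetical annotated sentences about the year 1990.
sentences = [
    {"Germany", "1990", "reunification"},
    {"Germany", "1990", "Berlin"},
    {"Germany", "World Cup", "1990"},
]
events = frequent_itemsets(sentences)
```

The pair ("1990", "Germany") surfaces as frequent, grouping the sentences that mention it; ranking those groups (information-theoretically, in the paper) and picking a representative sentence per group would complete the pipeline.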


International World Wide Web Conference (WWW) | 2018

Never-Ending Learning for Open-Domain Question Answering over Knowledge Bases

Abdalghani Abujabal; Rishiraj Saha Roy; Mohamed Yahya; Gerhard Weikum

Translating natural language questions to semantic representations such as SPARQL is a core challenge in open-domain question answering over knowledge bases (KB-QA). Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase where this model is deployed. Two major shortcomings of such methods are that (i) they require access to a large annotated training set that is not always readily available and (ii) they fail on questions from previously unseen domains. To overcome these limitations, this paper presents NEQA, a continuous learning paradigm for KB-QA. Offline, NEQA automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs. Once deployed, continuous learning is triggered on cases where templates are insufficient. Using a semantic similarity function between questions and judicious invocation of non-expert user feedback, NEQA learns new templates that capture previously unseen syntactic structures. This way, NEQA gradually extends its template repository. NEQA periodically re-trains its underlying models, allowing it to adapt to the language used after deployment. Our experiments demonstrate NEQA's viability, with steady improvement in answering quality over time, and the ability to answer questions from new domains.
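A highly simplified rendition of the never-ending loop described in this abstract: answer from known templates when possible; on a template miss, fall back to the most similar previously answered question and, after user confirmation, store a new template for the unseen structure. The keyword "templates", the Jaccard similarity, and all example questions are hypothetical simplifications of NEQA's learned components.

```python
def similarity(q1: str, q2: str) -> float:
    """Toy Jaccard similarity over word sets (stand-in for NEQA's
    learned semantic similarity function between questions)."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b)

class ContinuousQA:
    def __init__(self, templates):
        # cue word -> answer (a crude stand-in for syntactic-to-semantic
        # templates paired with queries).
        self.templates = dict(templates)
        self.answered = []  # (question, answer) history

    def answer(self, question, user_confirms=lambda q, a: True):
        for cue, ans in self.templates.items():
            if cue in question.lower():
                self.answered.append((question, ans))
                return ans
        # Template miss: reuse the most similar past question, and learn
        # a new cue from the unseen question once the user confirms.
        if self.answered:
            past_q, past_a = max(self.answered,
                                 key=lambda qa: similarity(question, qa[0]))
            if user_confirms(question, past_a):
                self.templates[question.lower().split()[0]] = past_a
                return past_a
        return None

qa = ContinuousQA({"capital": "Berlin"})
first = qa.answer("What is the capital of Germany?")
```

Each confirmed answer grows the template repository, so later questions with the same previously unseen structure are answered directly, which is the "steady improvement over time" the abstract reports at much larger scale.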


WWW '18: Companion Proceedings of The Web Conference 2018 | 2018

TempQuestions: A Benchmark for Temporal Question Answering

Zhen Jia; Abdalghani Abujabal; Rishiraj Saha Roy; Jannik Strötgen; Gerhard Weikum

Answering complex questions is one of the challenges that question-answering (QA) systems face today. While complexity has several facets, question dimensions like temporal and spatial intents necessitate specialized treatment. Methods geared towards such questions need benchmarks that reflect the desired aspects and challenges. Here, we take a key step in this direction and release a new benchmark, TempQuestions, containing 1,271 questions that are all temporal in nature, paired with their answers. As a key contribution that enabled the creation of this resource, we provide a crisp definition for temporal questions. Most questions require decomposition into sub-questions, and they are best evaluated on a combination of structured data and unstructured text sources. Experiments with two QA systems demonstrate the need for further research on complex questions.


Conference on Information and Knowledge Management (CIKM) | 2018

TEQUILA: Temporal Question Answering over Knowledge Bases

Zhen Jia; Abdalghani Abujabal; Rishiraj Saha Roy; Jannik Strötgen; Gerhard Weikum

Question answering over knowledge bases (KB-QA) poses challenges in handling complex questions that need to be decomposed into sub-questions. An important case, addressed here, is that of temporal questions, where cues for temporal relations need to be discovered and handled. We present TEQUILA, an enabler method for temporal QA that can run on top of any KB-QA engine. TEQUILA has four stages: it detects whether a question has temporal intent; it decomposes and rewrites the question into non-temporal sub-questions and temporal constraints; answers to the sub-questions are then retrieved from the underlying KB-QA engine; and finally, TEQUILA uses constraint reasoning on temporal intervals to compute final answers to the full question. Comparisons against state-of-the-art baselines show the viability of our method.
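The four stages above can be caricatured in a few lines. Everything here is a hypothetical toy: the cue-word intent detector, the hard-wired decomposition, and the candidate answers that stand in for what an underlying KB-QA engine would return; only the interval-overlap check reflects the general shape of constraint reasoning on temporal intervals.

```python
def has_temporal_intent(question: str) -> bool:
    """Stage 1: crude temporal-signal detection via cue words."""
    return any(cue in question.lower()
               for cue in ("before", "after", "during"))

def decompose(question: str):
    """Stage 2 (hard-wired toy): split 'X during Y' into a non-temporal
    sub-question and a temporal constraint phrase."""
    main, _, constraint = question.partition(" during ")
    return main + "?", constraint.rstrip("?")

def overlaps(a, b):
    """Stage 4 helper: do two (start, end) year intervals intersect?"""
    return a[0] <= b[1] and b[0] <= a[1]

# Stage 3 stand-in: candidate answers with validity intervals, as an
# underlying KB-QA engine might return them (invented example data).
candidates = {"Helmut Kohl": (1982, 1998), "Angela Merkel": (2005, 2021)}
constraint_interval = (1990, 1990)  # e.g. resolved from "reunification"

answers = [name for name, span in candidates.items()
           if overlaps(span, constraint_interval)]
```

Filtering the engine's candidates against the constraint interval is what lets a temporal layer like this run on top of any KB-QA engine without modifying it.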


arXiv: Computation and Language | 2018

Neural Named Entity Recognition from Subword Units

Abdalghani Abujabal; Judith Gaspers


arXiv: Computation and Language | 2018

ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters

Abdalghani Abujabal; Rishiraj Saha Roy; Mohamed Yahya; Gerhard Weikum


Very Large Data Bases (VLDB) | 2017

Query-Driven On-The-Fly Knowledge Base Construction

Dat Ba Nguyen; Abdalghani Abujabal; Nam Khanh Tran; Martin Theobald; Gerhard Weikum


International Joint Conference on Natural Language Processing (IJCNLP) | 2017

Efficiency-aware Answering of Compositional Questions using Answer Type Prediction

David Ziegler; Abdalghani Abujabal; Rishiraj Saha Roy; Gerhard Weikum

Collaboration

Top Co-Authors

Zhen Jia

Southwest Jiaotong University
