Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Christian Meilicke is active.

Publication


Featured research published by Christian Meilicke.


Journal on Data Semantics | 2011

Ontology alignment evaluation initiative: six years of experience

Jérôme Euzenat; Christian Meilicke; Heiner Stuckenschmidt; Pavel Shvaiko; Cássia Trojahn

In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required to reach such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarks for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view of the results of the campaigns carried out from 2005 to 2010, and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that further automation of benchmarking is needed to shorten the feedback cycle for tool developers.


Journal of Logic and Computation | 2009

Reasoning Support for Mapping Revision

Christian Meilicke; Heiner Stuckenschmidt; Andrei Tamilin

Finding correct semantic correspondences between heterogeneous ontologies is one of the most challenging problems in the area of semantic web technologies. As manually constructing such mappings is not feasible in realistic scenarios, a number of automatic matching tools have been developed that propose mappings based on general heuristics. As these heuristics often produce incorrect results, a manual revision is inevitable in order to guarantee the quality of generated mappings. Experiences with benchmarking matching systems revealed that the manual revision of mappings is still a very difficult problem because it has to take the semantics of the ontologies as well as interactions between mappings into account. In this article, we propose methods for supporting human experts in the task of revising automatically created mappings. In particular, we present non-standard reasoning methods for detecting and propagating implications of expert decisions on the correctness of a mapping.
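The propagation of expert decisions described in the abstract can be caricatured in a few lines (an illustrative toy, not the authors' reasoning procedure; the `entails` predicate stands in for a description-logic reasoner deciding whether one correspondence logically implies another):

```python
# Toy sketch of decision propagation in mapping revision. If c1 entails c2
# (as decided by a reasoner over both ontologies), then accepting c1 also
# accepts c2, and rejecting c2 also rejects c1 (contrapositive).

def propagate(decision, corr, entails, candidates):
    """Return all correspondences settled by one expert decision on `corr`."""
    settled = {corr}
    changed = True
    while changed:
        changed = False
        for c1 in candidates:
            for c2 in candidates:
                if not entails(c1, c2):
                    continue
                if decision == "accept" and c1 in settled and c2 not in settled:
                    settled.add(c2)
                    changed = True
                elif decision == "reject" and c2 in settled and c1 not in settled:
                    settled.add(c1)
                    changed = True
    return settled

# Hypothetical entailments between three candidate correspondences: a |= b, b |= c.
edges = {("a", "b"), ("b", "c")}
entails = lambda x, y: (x, y) in edges
```

With these toy entailments, accepting "a" settles all three correspondences, while rejecting "c" likewise settles all three (via the contrapositive), so the expert only has to make one decision instead of three.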


International Semantic Web Conference | 2010

Leveraging terminological structure for object reconciliation

Jan Noessner; Mathias Niepert; Christian Meilicke; Heiner Stuckenschmidt

It has been argued that linked open data is the major benefit of semantic technologies for the web as it provides a huge amount of structured data that can be accessed in a more effective way than web pages. While linked open data avoids many problems connected with the use of expressive ontologies such as the knowledge acquisition bottleneck, data heterogeneity remains a challenging problem. In particular, identical objects may be referred to by different URIs in different data sets. Identifying such representations of the same object is called object reconciliation. In this paper, we propose a novel approach to object reconciliation that is based on an existing semantic similarity measure for linked data. We adapt the measure to the object reconciliation problem, present exact and approximate algorithms that efficiently implement the methods, and provide a systematic experimental evaluation based on a benchmark dataset. As our main result, we show that the use of light-weight ontologies and schema information significantly improves object reconciliation in the context of linked open data.


European Semantic Web Conference | 2009

Improving Ontology Matching Using Meta-level Learning

Kai Eckert; Christian Meilicke; Heiner Stuckenschmidt

Despite serious research efforts, automatic ontology matching still suffers from severe problems with respect to the quality of matching results. Existing matching systems trade off precision and recall and have their specific strengths and weaknesses. This leads to problems when the right matcher for a given task has to be selected. In this paper, we present a method for improving matching results not by choosing a specific matcher but by applying machine learning techniques to an ensemble of matchers. We learn rules for the correctness of a correspondence based on the output of different matchers and additional information about the nature of the elements to be matched, thus compensating for the weaknesses of any individual matcher. We show that our method always performs significantly better than the median of the matchers used and in most cases outperforms the best matcher with an optimal threshold for a given pair of ontologies. As a side product of our experiments, we discovered that the majority vote is a simple but powerful heuristic for combining matchers that almost reaches the quality of our learning results.
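The majority-vote heuristic mentioned at the end of the abstract can be sketched in a few lines (an illustrative toy over invented correspondences, not the paper's evaluation setup):

```python
# Majority vote over an ensemble of matchers: each matcher proposes a set of
# correspondences (here, pairs of concept names); a correspondence is kept
# iff a strict majority of matchers propose it.

from collections import Counter

def majority_vote(matcher_outputs):
    """matcher_outputs: list of sets of correspondences, one set per matcher."""
    votes = Counter()
    for correspondences in matcher_outputs:
        votes.update(set(correspondences))  # at most one vote per matcher
    threshold = len(matcher_outputs) / 2
    return {c for c, n in votes.items() if n > threshold}

# Made-up outputs of three matchers; only ("Person", "Human") wins 3 of 3 votes.
outputs = [
    {("Person", "Human"), ("Paper", "Article")},
    {("Person", "Human")},
    {("Person", "Human"), ("Venue", "Location")},
]
```

A strict-majority threshold is one design choice among several; weighting votes by matcher confidence is an obvious refinement in the spirit of the learned rules the paper describes.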


Knowledge Acquisition, Modeling and Management | 2008

Learning Disjointness for Debugging Mappings between Lightweight Ontologies

Christian Meilicke; Johanna Völker; Heiner Stuckenschmidt

Dealing with heterogeneous ontologies by means of semantic mappings has become an important area of research, and a number of systems for discovering mappings between ontologies have been developed. Most of these systems rely on general heuristics for finding mappings and hence are bound to fail in many situations. Consequently, automatically generated mappings often contain logical inconsistencies that hinder a sensible use of these mappings. In previous work, we presented an approach for debugging mappings between expressive ontologies that eliminates inconsistencies by means of diagnostic reasoning. A shortcoming of this method was its need for expressive class definitions. More specifically, the applicability of this method critically relies on the existence of a high-quality disjointness axiomatization. This paper deals with the application of the debugging approach to mappings between lightweight ontologies that contain few or no disjointness axioms, as is the case for most of today's practical ontologies. After discussing different approaches to dealing with the absence of disjointness axioms, we propose the application of supervised machine learning for detecting disjointness in a fully automatic manner. We present a detailed evaluation of our approach to learning disjointness and its impact on mapping debugging. The results show that debugging automatically created mappings with the help of learned disjointness axioms significantly improves the overall quality of these mappings.


Journal of Web Semantics | 2012

MultiFarm: A benchmark for multilingual ontology matching

Christian Meilicke; Raúl García-Castro; Fred Freitas; Willem Robert van Hage; Elena Montiel-Ponsoda; Ryan Ribeiro de Azevedo; Heiner Stuckenschmidt; Ondřej Šváb-Zamazal; Vojtěch Svátek; Andrei Tamilin; Cássia Trojahn; Shenghui Wang

In this paper we present the MultiFarm dataset, which has been designed as a benchmark for multilingual ontology matching. The MultiFarm dataset is composed of a set of ontologies translated into different languages and the corresponding alignments between these ontologies. It is based on the OntoFarm dataset, which has been used successfully for several years in the Ontology Alignment Evaluation Initiative (OAEI). By translating the ontologies of the OntoFarm dataset into eight different languages (Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish), we created a comprehensive set of realistic test cases. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.


KI '07: Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence | 2007

Applying Logical Constraints to Ontology Matching

Christian Meilicke; Heiner Stuckenschmidt

Automatically discovering semantic relations between ontologies is an important task with respect to overcoming semantic heterogeneity on the semantic web. Ontology matching systems, however, often produce erroneous mappings. In this paper we propose a method for optimizing the precision and recall of existing matching systems. The method is based on the idea that logical constraints can be inferred by comparing subsumption relations between the concepts of the ontologies to be matched. In order to verify this principle we implemented a system that uses our method as a basis for optimizing mappings. We generated a set of synthetic ontologies and corresponding defective mappings and studied the behavior of our method with respect to the properties of the matching problem. The results show that our strategy actually improves the quality of the generated mappings.


Web Reasoning and Rule Systems | 2009

An Efficient Method for Computing Alignment Diagnoses

Christian Meilicke; Heiner Stuckenschmidt

Formal, logic-based semantics have long been neglected in ontology matching. As a result, almost all matching systems produce incoherent alignments of ontologies. In this paper we propose a new method for repairing such incoherent alignments that extends previous work on this subject. We describe our approach within the theory of diagnosis and introduce the notion of a local optimal diagnosis. We argue that computing a local optimal diagnosis is a reasonable choice for resolving alignment incoherence and suggest an efficient algorithm. This algorithm partially exploits incomplete reasoning techniques to increase runtime performance. Nevertheless, the completeness and optimality of the solution is still preserved. Finally, we test our approach in an experimental study and discuss results with respect to runtime and diagnostic quality.
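A greedy strategy in the spirit of a local optimal diagnosis can be sketched as follows (illustrative only; `toy_coherent` is a stand-in for the paper's reasoning-based coherence check, and the confidence values are invented):

```python
# Greedy alignment repair: process correspondences by descending confidence
# and discard any correspondence that would make the accepted set incoherent.
# The discarded correspondences form the diagnosis.

def greedy_repair(alignment, is_coherent):
    """alignment: list of (correspondence, confidence) pairs."""
    accepted = []
    diagnosis = []  # the removed correspondences
    for corr, conf in sorted(alignment, key=lambda x: -x[1]):
        if is_coherent(accepted + [corr]):
            accepted.append(corr)
        else:
            diagnosis.append(corr)
    return accepted, diagnosis

# Toy incoherence: two correspondences mapping the same source concept clash.
def toy_coherent(corrs):
    sources = [s for s, _ in corrs]
    return len(sources) == len(set(sources))

align = [(("A", "X"), 0.9), (("A", "Y"), 0.7), (("B", "Y"), 0.8)]
accepted, diagnosis = greedy_repair(align, toy_coherent)
```

Here the lower-confidence mapping of "A" ends up in the diagnosis. Calling the reasoner once per candidate, as above, is where incomplete reasoning techniques pay off in runtime, as the abstract notes.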


European Semantic Web Conference | 2014

A Probabilistic Approach for Integrating Heterogeneous Knowledge Sources

Arnab Dutta; Christian Meilicke; Simone Paolo Ponzetto

Open Information Extraction (OIE) systems like Nell and ReVerb have achieved impressive results by harvesting massive amounts of machine-readable knowledge with minimal supervision. However, the knowledge bases they produce still lack a clean, explicit semantic data model. This, on the other hand, could be provided by full-fledged semantic networks like DBpedia or Yago, which, in turn, could benefit from the additional coverage provided by Web-scale IE. In this paper, we bring these two strands of research together, and present a method to align terms from Nell with instances in DBpedia. Our approach is unsupervised in nature and relies on two key components. First, we automatically acquire probabilistic type information for Nell terms given a set of matching hypotheses. Second, we view the mapping task as the statistical inference problem of finding the most likely coherent mapping – i.e., the maximum a posteriori (MAP) mapping – based on the outcome of the first component used as a soft constraint. These two steps are highly intertwined: accordingly, we propose an approach that iteratively refines type acquisition based on the output of the mapping generator, and vice versa. Experimental results on gold-standard data indicate that our approach outperforms a strong baseline, and is able to produce ever-improving mappings consistently across iterations.
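The alternation between type acquisition and mapping can be caricatured as a simple fixed-point loop (purely illustrative; the term names, candidates, and types are invented, and the real system performs probabilistic MAP inference rather than this counting heuristic):

```python
# Toy alternation: (1) estimate a type distribution from the instances chosen
# by the current mapping, (2) remap each term to the candidate instance whose
# type best matches that distribution, and repeat until the mapping is stable.

from collections import Counter

def refine(term_candidates, instance_types, iterations=10):
    """term_candidates: {term: [candidate instances]};
    instance_types: {instance: type}. Returns {term: chosen instance}."""
    # start from the first candidate for every term
    mapping = {t: cands[0] for t, cands in term_candidates.items()}
    for _ in range(iterations):
        # step 1: type distribution over all currently chosen instances
        dist = Counter(instance_types[i] for i in mapping.values())
        # step 2: remap each term to the candidate with the most popular type
        new_mapping = {
            t: max(cands, key=lambda i: dist[instance_types[i]])
            for t, cands in term_candidates.items()
        }
        if new_mapping == mapping:
            break
        mapping = new_mapping
    return mapping

# Hypothetical data: "paris" is ambiguous between a person and a city.
term_candidates = {
    "paris": ["Paris_Hilton", "Paris"],
    "berlin": ["Berlin"],
    "rome": ["Rome"],
}
instance_types = {"Paris_Hilton": "Person", "Paris": "City",
                  "Berlin": "City", "Rome": "City"}
mapping = refine(term_candidates, instance_types)
```

Because the unambiguous terms pull the type distribution toward "City", the ambiguous term "paris" is remapped from the person to the city on the second pass, mimicking how the soft type constraint steers the mapping.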


International World Wide Web Conference | 2015

Enriching Structured Knowledge with Open Information

Arnab Dutta; Christian Meilicke; Heiner Stuckenschmidt

We propose an approach for semantifying web-extracted facts. In particular, we map subject and object terms of these facts to instances, and relational phrases to object properties defined in a target knowledge base. By doing this we resolve the ambiguity inherent in the web-extracted facts, while simultaneously enriching the target knowledge base with a significant number of new assertions. In this paper, we focus on the mapping of the relational phrases in the context of the overall workflow. Furthermore, in an open extraction setting identical semantic relationships can be represented by different surface forms, making it necessary to group these surface forms together. To solve this problem we propose the use of Markov clustering. In this work we present a complete, ontology-independent, generalized workflow which we evaluate on facts extracted by Nell and Reverb. Our target knowledge base is DBpedia. Our evaluation shows promising results in terms of producing highly precise facts. Moreover, the results indicate that the clustering of relational phrases pays off in terms of improved instance and property mapping.
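Markov clustering itself can be sketched as follows (a minimal, illustrative implementation over a made-up similarity matrix of relational phrases; how the paper computes phrase similarities is not shown here):

```python
# Markov clustering (MCL): alternate expansion (matrix squaring) and
# inflation (elementwise power followed by column normalization) on a
# column-stochastic similarity matrix, then read clusters off the rows.

def mcl(matrix, inflation=2.0, iterations=20):
    """matrix: square list-of-rows of non-negative similarities."""
    n = len(matrix)

    def normalize(m):
        for j in range(n):
            col = sum(m[i][j] for i in range(n)) or 1.0
            for i in range(n):
                m[i][j] /= col
        return m

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    m = normalize([row[:] for row in matrix])
    for _ in range(iterations):
        m = matmul(m, m)                                   # expansion
        m = normalize([[v ** inflation for v in row] for row in m])  # inflation
    clusters = {}  # deduplicate identical membership rows
    for i in range(n):
        clusters[frozenset(j for j in range(n) if m[i][j] > 0.01)] = True
    return list(clusters)

# Invented similarities between four relational phrases, forming two groups.
phrases = ["was born in", "birthplace of", "plays for", "member of"]
sim = [[1.0, 0.9, 0.0, 0.0],
       [0.9, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.8],
       [0.0, 0.0, 0.8, 1.0]]
groups = sorted(sorted(phrases[j] for j in c) for c in mcl(sim))
```

The inflation parameter controls cluster granularity: higher values break the graph into more, tighter clusters of surface forms.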

Collaboration


Dive into Christian Meilicke's collaborations.

Top Co-Authors


Heiner Stuckenschmidt

Free University of Bozen-Bolzano


Elena Kuss

University of Mannheim
