Gerhard Wohlgenannt
Vienna University of Economics and Business
Publication
Featured research published by Gerhard Wohlgenannt.
Data and Knowledge Engineering | 2010
Albert Weichselbraun; Gerhard Wohlgenannt; Arno Scharl
This paper presents a method to integrate external knowledge sources such as DBpedia and OpenCyc into an ontology learning system that automatically suggests labels for unknown relations in domain ontologies based on large corpora of unstructured text. The method extracts and aggregates verb vectors from semantic relations identified in the corpus. It composes a knowledge base which consists of (i) verb centroids for known relations between domain concepts, (ii) mappings between concept pairs and the types of known relations, and (iii) ontological knowledge retrieved from external sources. Applying semantic inference and validation to this knowledge base improves the quality of suggested relation labels. A formal evaluation compares the accuracy and average ranking precision of this hybrid method with the performance of methods that solely rely on corpus data and those that are only based on reasoning and external data sources.
International Journal of Metadata, Semantics and Ontologies | 2009
Albert Weichselbraun; Gerhard Wohlgenannt; Arno Scharl; Michael Granitzer; Thomas Neidhart; Andreas Juffinger
The identification and labelling of non-hierarchical relations are among the most challenging tasks in ontology learning. This paper describes a bottom-up approach for automatically suggesting ontology link types. The presented method extracts verb vectors from semantic relations identified in the domain corpus, aggregates them by computing centroids for known relation types, and stores the centroids in a central knowledge base. Comparing verb vectors extracted from unknown relations with the stored centroids yields link type suggestions. Domain experts evaluate these suggestions, refining the knowledge base and constantly improving the component's accuracy. A final evaluation provides a detailed statistical analysis of the introduced approach.
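The abstract does not specify how verb vectors are represented or compared; the following is a minimal sketch of the centroid-and-comparison idea, assuming sparse verb-weight dictionaries and cosine similarity (both assumptions for illustration, not details stated in the paper):

```python
import math
from collections import defaultdict

def centroid(vectors):
    """Element-wise mean of sparse verb vectors (dicts mapping verb -> weight)."""
    total = defaultdict(float)
    for vec in vectors:
        for verb, weight in vec.items():
            total[verb] += weight
    n = len(vectors)
    return {verb: w / n for verb, w in total.items()}

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def suggest_link_type(unknown_vec, centroids):
    """Rank known relation types by similarity of their centroid
    to the verb vector of the unknown relation."""
    return sorted(centroids, key=lambda t: cosine(unknown_vec, centroids[t]),
                  reverse=True)
```

An unknown relation whose context verbs resemble those of a known relation type will then rank that type first in the suggestion list.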
International Conference on Digital Information Management | 2007
Andreas Juffinger; Thomas Neidhart; Albert Weichselbraun; Gerhard Wohlgenannt; Michael Granitzer; Roman Kern; Arno Scharl
Semantic Web technologies in general, and ontology-based approaches in particular, are considered the foundation for the next generation of information services. While ontologies enable software agents to exchange knowledge and information in a standardised, intelligent manner, describing today's vast amount of information in terms of ontological knowledge and tracking the evolution of such ontologies remains a challenge. In this paper we describe Web 2.0 crawling for ontology evolution. The World Wide Web, or Web for short, is, due to its evolutionary properties and social network characteristics, a perfectly fitting data source for evolving an ontology. The decentralised structure of the Internet, the huge amount of data and emerging Web 2.0 technologies pose several challenges for a crawling system. In this paper we present a distributed crawling system with standard browser integration. The proposed system is a high-performance, site-script-based, noise-reducing crawler that loads standard-browser-equivalent content from Web 2.0 resources. Furthermore, we describe the integration of this spider into our ontology evolution framework.
Database and Expert Systems Applications | 2010
Albert Weichselbraun; Gerhard Wohlgenannt; Arno Scharl
Recent research shows the potential of utilizing data collected through Web 2.0 applications to capture changes in a domain's terminology. This paper presents an approach to augment corpus-based ontology learning by considering terms from collaborative tagging systems, social networking platforms, and micro-blogging services. The proposed framework collects information on the domain's terminology from domain documents and a seed ontology in a triple store. Data from social sources such as Delicious, Flickr, Technorati and Twitter provide an outside view of the domain and help incorporate external knowledge into the ontology learning process. The neural network technique of spreading activation is used to identify relevant new concepts, and to determine their positions in the extended ontology. Evaluating the method with two measures (PMI and expert judgements) demonstrates the significant benefits of social evidence sources for ontology learning.
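The paper uses spreading activation to rank candidate concepts, but the abstract gives no implementation details. The following is a generic sketch of the technique, with the graph, seed activations, decay factor and iteration count all chosen for illustration only:

```python
def spread_activation(graph, seeds, decay=0.5, iterations=3):
    """Propagate activation energy through a weighted concept graph.

    graph: node -> list of (neighbor, edge_weight) pairs
    seeds: node -> initial activation level
    Returns the activation level of every reached node; higher activation
    suggests stronger relevance to the seed concepts.
    """
    activation = dict(seeds)
    for _ in range(iterations):
        updated = dict(activation)
        for node, level in activation.items():
            for neighbor, weight in graph.get(node, []):
                # Each hop transfers a decayed share of the node's energy.
                updated[neighbor] = updated.get(neighbor, 0.0) + level * weight * decay
        activation = updated
    return activation
```

Concepts closer to the seed terms (here, terms from the seed ontology and social sources) accumulate more activation than distant ones, which is what makes the technique usable for selecting and positioning new concepts.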
Knowledge Acquisition, Modeling and Management | 2014
Florian Hanika; Gerhard Wohlgenannt; Marta Sabou
Crowdsourcing techniques have been shown to provide effective means for solving a variety of ontology engineering problems. Yet, they are mainly being used as external means to ontology engineering, without being closely integrated into the work of ontology engineers. In this paper we investigate how to closely integrate crowdsourcing into ontology engineering practices. Firstly, we show that a set of basic crowdsourcing tasks is used recurrently to solve a range of ontology engineering problems. Secondly, we present the uComp Protege plugin, which facilitates the integration of such typical crowdsourcing tasks into ontology engineering work from within the Protege ontology editing environment. An evaluation of the plugin in a typical ontology engineering scenario, where ontologies are built from automatically learned semantic structures, shows that its use reduces the ontology engineers' working time by a factor of 11, lowers the overall task costs by 40% to 83% depending on the crowdsourcing settings used, and leads to data quality comparable with that of tasks performed by ontology engineers.
Atlantic Web Intelligence Conference | 2007
Michael Granitzer; Arno Scharl; Albert Weichselbraun; Thomas Neidhart; Andreas Juffinger; Gerhard Wohlgenannt
Semantic Web technologies in general and ontology-based approaches in particular are considered the foundation for the next generation of information services. While ontologies enable software agents to exchange knowledge and information in a standardized, intelligent manner, describing today's vast amount of information in terms of ontological knowledge remains a challenge.
Sprachwissenschaft | 2016
Gerhard Wohlgenannt; Marta Sabou; Florian Hanika
Crowdsourcing techniques provide effective means for solving a variety of ontology engineering problems. Yet, they are mainly used as external support to ontology engineering, without being closely integrated into the work of ontology engineers. In this paper we investigate how to closely integrate crowdsourcing into ontology engineering practices. Firstly, we show that a set of basic crowdsourcing tasks is used recurrently to solve a range of ontology engineering problems. Secondly, we present the uComp Protege plugin, which facilitates the integration of such typical crowdsourcing tasks into ontology engineering from within the Protege ontology editor. An evaluation of the plugin in a typical ontology engineering scenario, where ontologies are built from automatically learned semantic structures, shows that its use reduces the ontology engineers' working time by a factor of 11, lowers the overall task costs by 40% to 83% depending on the crowdsourcing settings used, and leads to data quality comparable with that of tasks performed by ontology engineers. Evaluations on a large anatomy ontology confirm that crowdsourcing is a scalable and effective method: good-quality results (accuracy of 89% and 99%) are obtained while achieving cost reductions of 75% relative to ontology engineer costs, with comparable overall task duration.
Hawaii International Conference on System Sciences | 2011
Albert Weichselbraun; Gerhard Wohlgenannt; Arno Scharl
Recent research shows the potential of utilizing data collected through Web 2.0 applications to capture domain evolution. Relying on external data sources, however, often introduces delays due to the time spent retrieving data from these sources. The method introduced in this paper streamlines the data acquisition process by applying optimal stopping theory. An extensive evaluation demonstrates how such an optimization improves the processing speed of an ontology refinement component which uses Delicious to refine ontologies constructed from unstructured textual data while having no significant impact on the quality of the refinement process. Domain experts compare the results retrieved from optimal stopping with data obtained from standardized techniques to assess the effect of optimal stopping on data quality and the created domain ontology.
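The abstract does not state which stopping rule the method applies. As a generic illustration of optimal stopping, here is the classic 1/e "secretary" rule, which observes the first n/e candidates without committing and then accepts the first candidate that beats all of them (a textbook sketch, not the paper's specific method):

```python
import math

def secretary_stop(scores):
    """Classic 1/e stopping rule over a sequence of candidate scores.

    Observe (and reject) the first n/e candidates to calibrate a threshold,
    then accept the first later candidate exceeding everything seen so far.
    Returns (index, score) of the accepted candidate.
    """
    n = len(scores)
    cutoff = max(1, int(n / math.e))      # size of the observation phase
    best_seen = max(scores[:cutoff])      # calibration threshold
    for i in range(cutoff, n):
        if scores[i] > best_seen:
            return i, scores[i]
    return n - 1, scores[-1]              # fall back to the last candidate
```

In a data-acquisition setting like the paper's, the "score" could be the marginal value of waiting for one more batch of responses from an external source such as Delicious; the rule bounds how long the component waits before proceeding.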
European Semantic Web Conference | 2015
Gerhard Wohlgenannt
Ontology learning (OL) aims at the semi-automatic acquisition of ontologies from sources of evidence, typically domain text. Recently, there has been a trend towards the application of multiple and heterogeneous evidence sources in OL. Heterogeneous sources provide benefits such as higher accuracy, by exploiting redundancy across evidence sources and including complementary information. When using evidence sources that are heterogeneous in quality, amount of data provided, and type, a number of questions arise: How many sources are needed to see significant benefits from heterogeneity? What is an appropriate number of evidences per source? Is balancing the number of evidences per source important? And to what degree can the integration of multiple sources overcome low-quality input from individual sources? This research presents an extensive evaluation based on an existing OL system. It gives answers and insights on the research questions posed for the OL task of concept detection, and provides further lessons learned. Among other things, our results suggest that a moderate number of evidences per source, as well as a moderate number of sources resulting in a few thousand data instances, is sufficient to exploit the benefits of heterogeneous evidence integration.
International Conference on the Move to Meaningful Internet Systems | 2007
Albert Weichselbraun; Arno Scharl; Wei Liu; Gerhard Wohlgenannt
Ontology evolution is an intrinsic phenomenon of any knowledge-intensive system, which can be addressed either implicitly or explicitly. This paper describes an explicit, data-driven approach to capture and visualize ontology evolution by semi-automatically extending small seed ontologies. This process captures ontology changes reflected in large document collections. The visualization of these changes helps characterize the evolution process, and distinguish core, extended and peripheral relations between concepts.