Christina Feilmayr
Johannes Kepler University of Linz
Publications
Featured research published by Christina Feilmayr.
Proceedings of the 1st Workshop on Context, Information and Ontologies | 2009
Robert Barta; Christina Feilmayr; Birgit Pröll; Christoph Grün; Hannes Werthner
Personalized online tourism services play a crucial role for tourists. In order to deliver adequate information, a semantic matching between tourism services and user context is needed. In the phase of trip planning, the essential user context comprises primarily user preferences and interests, while during the trip location and time context are added. While being on the move, unexpected events might force tourists to completely reschedule their travel plan and look for alternatives. In order to facilitate semantic matching between alternative touristic sites and user context, a specific vocabulary for the tourism domain, user type, time and location is needed. We demonstrate in this paper that existing tourism ontologies can hardly fulfill this goal as they mainly focus on domain concepts. The goal of this paper is to provide an alternative approach for covering the semantic space of tourism through the integration of modularized ontologies, such as user, W3C Time or W3C Geo, that center around a core domain ontology for the tourism sector.
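As a rough illustration of the kind of ontology integration and semantic matching the paper argues for, the following Python/rdflib sketch merges modular ontology data around an assumed core tourism vocabulary and matches touristic sites to user interests; the ex: vocabulary, toy data, and query are illustrative assumptions, not artifacts of the paper.

    # Illustrative sketch: the ex: vocabulary and toy data are assumptions; real
    # modules (W3C Time, W3C Geo, a user model) would be merged the same way.
    from rdflib import Graph

    core_module = """
    @prefix ex: <http://example.org/tourism#> .
    ex:ArsElectronicaCenter a ex:TouristicSite ; ex:hasCategory ex:Technology .
    ex:Schlossmuseum        a ex:TouristicSite ; ex:hasCategory ex:History .
    """
    user_module = """
    @prefix ex: <http://example.org/tourism#> .
    ex:jane a ex:Tourist ; ex:hasInterest ex:History .
    """

    g = Graph()
    for module in (core_module, user_module):       # merge modular ontologies
        g.parse(data=module, format="turtle")

    # Semantic matching: sites whose category matches the user's interests.
    query = """
    PREFIX ex: <http://example.org/tourism#>
    SELECT ?site WHERE {
        ?user a ex:Tourist ; ex:hasInterest ?category .
        ?site a ex:TouristicSite ; ex:hasCategory ?category .
    }
    """
    for row in g.query(query):
        print("suggested site:", row.site)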
data and knowledge engineering | 2016
Christina Feilmayr; Wolfram Wöß
Ontologies have been less successful than they could be in large-scale business applications due to a wide variety of interpretations. This leads to confusion, and consequently, people from various research communities use the term with different – sometimes incompatible – meanings. This research work analyzes and clarifies the term ontology and points out its difference from taxonomy. By way of two business case studies, both their potential in ontological engineering and the perceived requirements for ontologies are highlighted, and their misuse in research and business is discussed. In order to examine the case for applying ontologies in a specific domain or use case, the main benefits of using ontologies are defined and categorized as technical-centered or user-centered. Key factors that influence the use of ontologies in business applications are derived and discussed. Finally, the paper offers a recommendation for efficiently applying ontologies, including adequate representation languages and an ontological engineering process supported by reference ontologies. To answer the questions of when ontologies should be used, how they can be used efficiently, and when they should not be used, we propose guidelines for selecting an appropriate model, methodology, and tool set to meet customer requirements while making most efficient use of resources.
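To make the taxonomy/ontology distinction discussed here concrete, the following rdflib sketch (an illustration built on assumed example concepts, not material from the paper) contrasts a bare subclass hierarchy with an ontology that additionally declares typed relations between concepts.

    # Illustration under assumed example concepts; not material from the paper.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/business#")
    g = Graph()
    g.bind("ex", EX)

    # A taxonomy is just an is-a hierarchy ...
    g.add((EX.Laptop, RDFS.subClassOf, EX.Product))
    g.add((EX.Tablet, RDFS.subClassOf, EX.Product))

    # ... while an ontology additionally declares typed relations and constraints
    # between concepts, which is what enables reasoning beyond classification.
    g.add((EX.suppliedBy, RDF.type, OWL.ObjectProperty))
    g.add((EX.suppliedBy, RDFS.domain, EX.Product))
    g.add((EX.suppliedBy, RDFS.range, EX.Supplier))

    print(g.serialize(format="turtle"))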
database and expert systems applications | 2011
Christina Feilmayr
Information extraction (IE) and knowledge discovery in databases (KDD) are both useful approaches for discovering information in textual corpora, but they have some deficiencies. Information extraction can identify relevant sub-sequences of text, but is usually unaware of emerging, previously unknown knowledge and regularities in a text and thus cannot form new facts or new hypotheses. Complementary to information extraction, emerging data mining methods and techniques promise to overcome these deficiencies. This research work combines the benefits of both approaches by integrating data mining and information extraction methods. The aim is to provide a new high-quality information extraction methodology and, at the same time, to improve the performance of the underlying extraction system. Consequently, the new methodology should shorten the life cycle of information extraction engineering, because information predicted in early extraction phases can be used in further extraction steps, and the extraction rules developed require fewer arduous test-and-debug iterations. Effectiveness and applicability are validated by processing online documents from the areas of eHealth and eRecruitment.
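A minimal Python sketch of the general idea of feeding mined regularities back into extraction; the slot names, toy data, and co-occurrence heuristic are assumptions made for illustration, not the methodology of the paper.

    # Sketch: use regularities mined from already-filled templates to propose
    # values for slots the extractor left empty. Slot names and data are assumed.
    from collections import Counter, defaultdict

    filled = [  # templates produced by an earlier extraction run
        {"job_title": "nurse", "sector": "eHealth"},
        {"job_title": "nurse", "sector": "eHealth"},
        {"job_title": "recruiter", "sector": "eRecruitment"},
    ]

    # "Mine" simple co-occurrence statistics between slot values.
    cooc = defaultdict(Counter)
    for t in filled:
        if "job_title" in t and "sector" in t:
            cooc[t["job_title"]][t["sector"]] += 1

    def suggest_sector(template):
        """Propose a value for a missing 'sector' slot from mined regularities."""
        if "sector" in template or template.get("job_title") not in cooc:
            return template
        best, _ = cooc[template["job_title"]].most_common(1)[0]
        return {**template, "sector": best}

    print(suggest_sector({"job_title": "nurse"}))   # -> 'eHealth' suggested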
database and expert systems applications | 2013
Christina Feilmayr
Incomplete information in web intelligence applications has serious consequences: inaccurate statements predominate, resulting primarily in erroneous annotations and ultimately in inaccurate reasoning on the web. This research work focuses on improving the completeness of extraction results by applying judiciously selected assessment methods to information extraction within the principle of complementarity. On the one hand, this paper discusses several requirements an assessment method must meet in terms of processability and profitability to guarantee effective operation in a complementarity approach. On the other hand, it proposes a recommendation model to guide an IE system designer in selecting the appropriate methods for optimizing web data quality. The paper concludes with an application scenario that supports the theoretical approach.
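One of the simpler assessment methods such a recommendation model could draw on is a slot-fill completeness score per extracted template. The sketch below illustrates this under an assumed slot schema and example data; it is not taken from the paper.

    # Illustrative completeness assessment over extracted templates.
    # The expected slot schema and the data are assumptions for this sketch.
    EXPECTED_SLOTS = {"name", "location", "opening_hours", "price"}

    def completeness(template):
        """Fraction of expected slots that hold a non-empty value."""
        filled = {s for s in EXPECTED_SLOTS if template.get(s)}
        return len(filled) / len(EXPECTED_SLOTS)

    templates = [
        {"name": "Ars Electronica Center", "location": "Linz", "price": "11.50"},
        {"name": "Schlossmuseum"},
    ]
    for t in templates:
        print(f"{t['name']}: completeness = {completeness(t):.2f}")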
international semantic web conference | 2012
Christina Feilmayr
Incomplete templates (attribute-value pairs) and loss of structural and/or semantic information in information extraction tasks lead to problems in downstream information processing steps. Consequently, methods that help to overcome this incompleteness by obtaining new, additional information, such as emerging data mining techniques, are needed. This research work integrates data mining and information extraction methods into a single complementary approach in order to benefit from their respective advantages and reduce incompleteness in information extraction. In this context, complementarity is the combination of pieces of information from different sources, resulting in (i) reassessment of contextual information and suggestion generation and (ii) better assessment of plausibility to enable more precise value selection, class assignment, and matching. For these purposes, a recommendation model that determines which methods can attack a specific problem is proposed. In conclusion, the improvements in information extraction domain analysis will be evaluated.
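The recommendation model itself is not detailed in the abstract; the following toy sketch only shows the shape such a mapping from extraction problems to candidate data mining methods could take, with problem categories and method names invented as placeholders.

    # Toy recommendation lookup: which methods could attack which incompleteness
    # problem. Categories and method names are assumed examples, not the model.
    RECOMMENDATIONS = {
        "missing_value":     ["association rule mining", "classification"],
        "ambiguous_class":   ["clustering", "classification"],
        "implausible_value": ["outlier detection"],
    }

    def recommend(problem):
        return RECOMMENDATIONS.get(problem, ["no suitable method registered"])

    print(recommend("missing_value"))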
database and expert systems applications | 2012
Christina Feilmayr
Low information quality is one of the reasons why information extraction initiatives fail. Incomplete information has a pervasive negative impact on downstream processing steps. This work addresses this problem with a novel information extraction approach, which integrates data mining and information extraction methods into a single complementary approach in order to benefit from their respective advantages and reduce incompleteness in information extraction. In this context, various types of incompleteness are identified and an approach to their automatic detection is presented. Further, a prototype generic framework that incorporates the complementarity approach is proposed.
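A minimal sketch of what automatic detection of incompleteness types might look like; the categories and slot names checked below are assumptions for illustration, not those of the proposed framework.

    # Sketch: flag different kinds of incompleteness in an extracted template.
    # The categories and slot names are illustrative assumptions.
    def detect_incompleteness(template, expected_slots):
        issues = []
        missing = expected_slots - set(template)
        if missing:
            issues.append(f"missing slots: {sorted(missing)}")
        if any(v in ("", None) for v in template.values()):
            issues.append("empty values present")
        if "price" in template and not str(template["price"]).replace(".", "").isdigit():
            issues.append("price not normalized")
        return issues

    t = {"name": "City Museum", "price": "from 8 EUR"}
    print(detect_incompleteness(t, {"name", "location", "price"}))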
database and expert systems applications | 2012
Christina Feilmayr; Klaudija Vojinovic; Birgit Pröll
Information extraction systems are developed for various specific application domains to manage an increasing amount of unstructured data. The majority build either upon the knowledge-based approach, which promises high accuracy but involves labour-intensive coding of extraction rules, or upon the automatically trainable systems approach, which produces highly portable solutions but requires an appropriate learning set. In this paper, we present results of a project that aims to provide a new methodology which combines the knowledge-based and the machine learning approach into a hybrid one in order to compensate for their respective shortcomings and to achieve high IE performance. Firstly, we propose the idea of a multi-dimensional space that guides users in selecting appropriate methods, i.e., different hybrid concepts, depending on the extraction task and the level of available features. Secondly, we provide the concept of one hybrid approach, namely the sequential processing of a knowledge-based approach and a selection of different machine learning methods. Thirdly, we present the evaluation of an implementation of the sequential extraction on a curriculum vitae corpus. Thus, we provide first results for filling the multi-dimensional space for hybrid information extraction.
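A hedged sketch of the sequential hybrid idea (a high-precision rule pass followed by a learned classifier for the remaining segments) on CV-like text; the rules, labels, and tiny training set are invented for illustration and are not the project's corpus or feature set.

    # Sketch of a sequential hybrid: deterministic rules first, then a learned
    # classifier for segments the rules leave unlabelled. Training data is made up.
    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def rule_pass(segment):
        """Knowledge-based step: high-precision handcrafted rules."""
        return "contact" if EMAIL.search(segment) else None

    # Machine learning step for everything the rules could not decide.
    train_texts  = ["MSc in Computer Science, JKU Linz", "Java, Python, SQL",
                    "Software engineer at ACME 2010-2014"]
    train_labels = ["education", "skills", "work_experience"]
    clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

    for seg in ["jane.doe@example.org", "PhD studies in Informatics"]:
        label = rule_pass(seg) or clf.predict([seg])[0]
        print(seg, "->", label)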
international conference on computers helping people with special needs | 2010
Bernhard Dürnegger; Christina Feilmayr; Wolfram Wöß
Web content, such as text, graphics, audio and video, should be available and accessible for everybody, but especially for disabled and elderly people. Graphics (like figures, diagrams, maps, charts and images), above all graphics with a high explanatory and content value, still may constitute massive barriers for specific user groups. Using Scalable Vector Graphics (SVG), an open standard published by the World Wide Web Consortium (W3C), provides new possibilities for the accessibility of web sites. SVG is based on XML and consequently gains important advantages such as search and index functions. Moreover, SVG files can be processed by tactile displays or screen readers. To support the authoring process and make graphics accessible, guidelines are developed which empower authors to detect and evaluate potential barriers in their own SVGs. An additional software-based evaluation tool guides authors in fulfilling the accessibility guidelines and simplifies the process of making SVG documents usable and valuable for as many people as possible.
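The following minimal Python sketch illustrates the kind of automated check such an evaluation tool could perform, testing whether an SVG carries the <title> and <desc> elements that assistive technologies rely on; it is an illustration, not the tool described in the paper.

    # Minimal illustration: check an SVG for <title>/<desc>, which assistive
    # technologies need. This is not the evaluation tool from the paper.
    import xml.etree.ElementTree as ET

    SVG_NS = "{http://www.w3.org/2000/svg}"

    svg_doc = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <title>Population by district</title>
      <rect x="10" y="10" width="30" height="80"/>
    </svg>"""

    root = ET.fromstring(svg_doc)
    issues = []
    if root.find(f"{SVG_NS}title") is None:
        issues.append("missing <title> (short textual alternative)")
    if root.find(f"{SVG_NS}desc") is None:
        issues.append("missing <desc> (long description)")

    print(issues or "no basic accessibility issues found")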
Praxis Der Wirtschaftsinformatik | 2009
Christina Feilmayr; Birgit Pröll
Information extraction (IE) from tourism websites offers numerous tourism applications an alternative to manual data acquisition. Tourism websites pose specific challenges for web information extraction, e.g., natural-language descriptions of tourism offers, heterogeneous web page structures, or complex price structures. Current systems implement ontology-based information extraction (OBIE) and use an (existing) tourism domain ontology as their knowledge base. The discussion and evaluation of the TourIE system presented in this article show that ontology-based information extraction in eTourism is promising, but that it can only be a semi-automatic measure.
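As a rough illustration of ontology-based annotation on tourism offers (gazetteer entries derived from ontology concepts plus a price pattern), consider the sketch below; the concept IRIs, lexicon, and regular expression are assumptions, not TourIE's actual knowledge base or rules.

    # Sketch of ontology-driven annotation; the concept IRIs, lexicon, and price
    # pattern are assumptions for illustration, not TourIE's knowledge base.
    import re

    # Lexicalizations attached to tourism ontology concepts (assumed).
    GAZETTEER = {
        "double room": "http://example.org/tourism#DoubleRoom",
        "half board":  "http://example.org/tourism#HalfBoard",
        "sauna":       "http://example.org/tourism#WellnessFacility",
    }
    PRICE = re.compile(r"(?:from\s+)?(?:EUR|€)\s?\d+(?:[.,]\d{2})?(?:\s*per\s+\w+)?",
                       re.IGNORECASE)

    def annotate(text):
        annotations = [(m.group(), "price") for m in PRICE.finditer(text)]
        lowered = text.lower()
        annotations += [(term, iri) for term, iri in GAZETTEER.items() if term in lowered]
        return annotations

    offer = "Double room with half board from EUR 95 per night, sauna included."
    print(annotate(offer))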
data warehousing and knowledge discovery | 2014
Thomas Leitner; Christina Feilmayr; Wolfram Wöß
The manufacturing industry has come to recognize the potential of the data it generates as an information source for quality management departments to detect potential problems in production as early and as accurately as possible. This is essential for reducing warranty costs and ensuring customer satisfaction. One of the greatest challenges in quality management is that the amount of data produced during the development and manufacturing process and in the after-sales market grows rapidly. Thus, the need for automated detection of meaningful information arises. This work focuses on enhancing quality management by applying data mining approaches and introduces: (i) a meta model for data integration; (ii) a novel company-internal analysis method which uses statistics and data mining to process the data in its entirety to find interesting, concealed information; and (iii) the application Q-AURA (quality - abnormality and cause analysis), an implementation of the concepts for an industrial partner in the automotive industry.
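A small sketch of the general abnormality-detection idea on warranty claim counts, using a simple leave-one-out z-score test; the data, granularity, and threshold are invented and not taken from Q-AURA.

    # Sketch: flag months whose warranty claim count deviates strongly from the
    # other months. Data, granularity, and the 3-sigma threshold are assumptions.
    from statistics import mean, stdev

    claims_per_month = {
        "2013-01": 12, "2013-02": 14, "2013-03": 11,
        "2013-04": 13, "2013-05": 15, "2013-06": 41,   # suspicious spike
    }

    for month, count in claims_per_month.items():
        others = [v for m, v in claims_per_month.items() if m != month]
        mu, sigma = mean(others), stdev(others)
        if sigma and abs(count - mu) > 3 * sigma:
            print(f"abnormality in {month}: {count} claims (baseline mean {mu:.1f})")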