Günther Fliedl
Alpen-Adria-Universität Klagenfurt
Publications
Featured research published by Günther Fliedl.
information and communication technologies in tourism | 2012
Dietmar Gräbner; Markus Zanker; Günther Fliedl; Matthias Fuchs
In this paper we propose a system that classifies customer reviews of hotels by means of sentiment analysis. We elaborate on a process to extract a domain-specific lexicon of semantically relevant words from a given corpus (Scharl et al., 2003; Pak & Paroubek, 2010). The resulting lexicon backs the sentiment analysis for generating a classification of the reviews. The evaluation of the classification on test data shows that the proposed system outperforms a predefined baseline: when a customer review is classified as good or bad, the classification is correct with a probability of about 90%.
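The paper's pipeline is not given as code, but the final lexicon-backed classification step can be pictured as follows. This is a minimal sketch in Python; the lexicon entries, weights and threshold are illustrative assumptions, not the extracted hotel-domain lexicon from the paper.

```python
# Minimal sketch of lexicon-based review classification. The lexicon is a
# hand-made stand-in; the actual system derives it from a review corpus.
HOTEL_LEXICON = {
    "clean": 1.0, "friendly": 0.8, "spacious": 0.6,
    "dirty": -1.0, "noisy": -0.7, "rude": -0.9,
}

def classify_review(text: str, threshold: float = 0.0) -> str:
    """Sum the lexicon scores of known words and map the total to a label."""
    score = sum(HOTEL_LEXICON.get(w, 0.0) for w in text.lower().split())
    return "good" if score > threshold else "bad"

print(classify_review("The room was clean and the staff friendly"))  # good
print(classify_review("dirty bathroom and rude reception"))          # bad
```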
data and knowledge engineering | 2005
Günther Fliedl; Christian Kop; Heinrich C. Mayr
Scenarios are a very popular means for describing and analyzing behavioral aspects on the level of natural language. In information systems design, they form the basis for a subsequent step of conceptual dynamic modeling. To enhance this step, linguistic instruments prove applicable for transforming scenarios into conceptual schemas of various models. This transformation usually consists of three steps: linguistic analysis, component mapping and schema construction. Within this paper we investigate to what extent these steps may be performed automatically in the framework of KCPM, a conceptual predesign model which is used as an interlingua between natural language and arbitrary conceptual models.
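As a rough illustration of the three transformation steps, the sketch below fakes each stage on trivial subject-verb-object sentences. All function names and data structures are hypothetical; the real KCPM tooling works on full linguistic analyses.

```python
# Hypothetical sketch of the three-step transformation described above:
# linguistic analysis -> component mapping -> schema construction.

def linguistic_analysis(sentence: str) -> dict:
    # Stand-in for a real tagger/parser; handles only "subject verb object".
    subj, verb, obj = sentence.lower().rstrip(".").split()
    return {"subject": subj, "verb": verb, "object": obj}

def component_mapping(parse: dict) -> dict:
    # Map linguistic roles to KCPM-style notions.
    return {"actor": parse["subject"],
            "operation_type": parse["verb"],
            "parameter": parse["object"]}

def schema_construction(components: list) -> str:
    # Render the collected components as a toy schema listing.
    return "\n".join(f"{c['actor']} --{c['operation_type']}--> {c['parameter']}"
                     for c in components)

parses = [linguistic_analysis(s) for s in
          ["Customer orders product.", "Clerk checks order."]]
print(schema_construction([component_mapping(p) for p in parses]))
```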
data and knowledge engineering | 2000
Günther Fliedl; Christian Kop; Heinrich C. Mayr; Willi Mayerthaler; Christian Winkler
Usually, the development of an information system (or some part of it) starts with requirements elicitation, followed by a phase of collection and analysis that results in a set of requirements specifications. As opposed to conventional conceptual modeling, where input texts are formalized, our approach suggests collecting and cataloguing natural language patterns in a non-textual form immediately after a linguistic analysis. This linguistic analysis is done according to the NTMS model. The collecting and cataloguing of natural language data is supported by KCPM.
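The idea of cataloguing analyzed language material in a non-textual form can be pictured as filling glossary rows instead of keeping running text. The field names below are illustrative assumptions, not the published KCPM metamodel.

```python
# Illustrative sketch: catalogue analyzed sentence material as glossary
# entries (thing-types and connection-types) with provenance.
from dataclasses import dataclass

@dataclass
class ThingType:
    name: str
    source_sentence: str  # where the notion was found

@dataclass
class ConnectionType:
    involved: tuple
    reading: str

things = [ThingType("customer", "A customer places an order."),
          ThingType("order", "A customer places an order.")]
connections = [ConnectionType(("customer", "order"), "customer places order")]

for t in things:
    print(f"{t.name:10} <- {t.source_sentence!r}")
```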
applications of natural language to data bases | 2007
Günther Fliedl; Christian Kop; Jürgen Vöhringer
The ontology language OWL has become increasingly important in recent years. However, due to its uncontrolled growth, OWL ontologies are in many cases very heterogeneous with respect to their class and property labels, which often lack a common and systematic structure. For this reason we linguistically analyzed OWL class and property labels, focusing on their implicit structure. Based on the results of this analysis we generated a first proposal for linguistically determined label generation, which can be seen as a prerequisite for mapping OWL concepts to natural language patterns.
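The first step of such a label analysis is recovering the implicit token structure of names like hasAuthor or PersonName. The regex-based splitter below is a minimal sketch of that idea, not the grammar used in the paper.

```python
import re

def split_owl_label(label: str) -> list:
    """Split a camelCase/underscore OWL label into its implicit tokens."""
    tokens = []
    for part in re.split(r"[_\s]+", label):
        tokens += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", part)
    return [t.lower() for t in tokens]

print(split_owl_label("hasAuthor"))       # ['has', 'author']
print(split_owl_label("PersonName"))      # ['person', 'name']
print(split_owl_label("is_part_of_VAT"))  # ['is', 'part', 'of', 'vat']
```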
data and knowledge engineering | 2010
Günther Fliedl; Christian Kop; Jürgen Vöhringer
The ontology language OWL has become increasingly important in recent years. However, due to its uncontrolled growth, OWL ontologies are in many cases very heterogeneous with respect to their class and property labels, which often lack a common and systematic structure. For this reason we developed linguistically based guidelines for OWL class and property labels, focusing on their implicit structure. Building on these guidelines, we propose an evaluation mechanism including rules for comparing the linguistically triggered label interpretations to their OWL-internal representations. Our proposal also includes the verbalization of these evaluated OWL labels.
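One rule type one could imagine in such an evaluation mechanism checks whether a property label follows a verb-initial pattern and, if so, verbalizes it. The verb list and sentence template below are toy assumptions, not the published guidelines.

```python
import re

VERB_PREFIXES = ("has", "is", "belongs", "consists")  # assumed rule lexicon

def follows_guideline(label: str) -> bool:
    """Toy guideline check: a property label should start with a known verb."""
    return label.lower().startswith(VERB_PREFIXES)

def verbalize(domain: str, prop: str, range_: str) -> str:
    """Render a (domain, property, range) triple as an English sentence."""
    words = re.findall(r"[A-Z]?[a-z]+", prop)
    return f"{domain} {' '.join(w.lower() for w in words)} {range_}."

print(follows_guideline("hasAuthor"))            # True
print(verbalize("Book", "hasAuthor", "Person"))  # Book has author Person.
```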
international conference on data technologies and applications | 2006
Marcus Hassler; Günther Fliedl
Tokenization is commonly understood as the first step of any kind of natural language text preparation. The major goal of this early (pre-linguistic) task is to convert a stream of characters into a stream of processing units called tokens. In the text mining community this job is largely taken for granted: it is commonly seen as an already solved problem, comprising the identification of word borders and punctuation marks separated by spaces and line breaks. In our view, however, it should manage language-related word dependencies, incorporate domain-specific knowledge, and handle morphosyntactically relevant linguistic specificities. Therefore, we propose rule-based extended tokenization that includes all sorts of linguistic knowledge (e.g., grammar rules, dictionaries). The core features of our implementation are the identification and disambiguation of all kinds of linguistic markers, the detection and expansion of abbreviations, the treatment of special formats, and the typing of tokens, including single- and multi-tokens. To improve the quality of text mining we suggest linguistically based tokenization as a necessary step preceding further text processing tasks. In this paper, we focus on the task of improving the quality of standard tagging.
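A stripped-down version of such extended tokenization, covering just abbreviation expansion and multi-token grouping, might look like the following. The abbreviation table and multi-word list are illustrative, not the paper's resources.

```python
# Minimal sketch of rule-based extended tokenization: expand known
# abbreviations and merge known multi-word units into single tokens.
ABBREVIATIONS = {"Dr.": "Doctor", "approx.": "approximately"}
MULTI_TOKENS = {("New", "York"), ("operating", "system")}

def extended_tokenize(text: str) -> list:
    expanded = []
    for tok in text.split():
        expanded.extend(ABBREVIATIONS.get(tok, tok).split())
    tokens, i = [], 0
    while i < len(expanded):
        if tuple(expanded[i:i + 2]) in MULTI_TOKENS:
            tokens.append("_".join(expanded[i:i + 2]))  # one multi-token
            i += 2
        else:
            tokens.append(expanded[i])
            i += 1
    return tokens

print(extended_tokenize("Dr. Smith lives in New York"))
# ['Doctor', 'Smith', 'lives', 'in', 'New_York']
```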
WIT Transactions on Information and Communication Technologies | 2002
Günther Fliedl; Georg Weber
NIBA-TAG is a multilevel natural language tagger with rich functionality. It functions as a word stemmer, a morphological parser and a regular POS tagger that uses syntactic and semantic features for contextually influenced word tagging. Each rule is based on a ranking mechanism that currently distinguishes the levels "fact", "proposal" and "guess". One of the postprocessing units analyzes the ranking structure and can promote a "proposal" to a "fact" if enough rules made an identical proposal for a word. The default output is XML, for which the level of precision can be specified: one could generate an XML file including only the guesses, or a file with all attributes relevant to the status of a proposal.
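The promotion of a "proposal" to a "fact" when enough rules agree can be sketched as a small postprocessing function. The vote threshold and the (tag, level) representation are assumptions; the abstract does not specify them.

```python
# Toy sketch of the ranking postprocessing described above: if enough rules
# made the same "proposal" for a word, promote that proposal to a "fact".
from collections import Counter

PROMOTION_THRESHOLD = 3  # assumed value

def postprocess(rule_results: list) -> tuple:
    """rule_results: (tag, level) pairs, level in {fact, proposal, guess}."""
    facts = [tag for tag, lvl in rule_results if lvl == "fact"]
    if facts:
        return facts[0], "fact"
    proposals = Counter(tag for tag, lvl in rule_results if lvl == "proposal")
    if proposals:
        tag, votes = proposals.most_common(1)[0]
        return (tag, "fact") if votes >= PROMOTION_THRESHOLD else (tag, "proposal")
    guesses = [tag for tag, lvl in rule_results if lvl == "guess"]
    return (guesses[0], "guess") if guesses else ("UNKNOWN", "guess")

print(postprocess([("N", "proposal")] * 3 + [("V", "guess")]))  # ('N', 'fact')
```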
applications of natural language to data bases | 2004
Günther Fliedl; Christian Kop; Heinrich C. Mayr; Christian Winkler; Georg Weber; Alexander Salbrechter
The paper outlines a multilevel tagging approach to the linguistic analysis of requirements texts. It is shown that extended semantic tagging, including chunk parsing of noun groups and prepositional groups, makes it possible to identify structural items that can be mapped to the conceptual notions for dynamic modeling in KCPM, namely actor, operation-type and condition.
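The mapping from tagged chunks to KCPM notions can be imagined as a lookup over chunk types. The chunk labels and the mapping table below are illustrative assumptions.

```python
# Illustrative sketch: map tagged chunks of a requirements sentence to the
# KCPM dynamic-modeling notions named above (actor, operation-type, condition).
CHUNK_TO_KCPM = {
    "NG_subject": "actor",
    "VG_main": "operation-type",
    "COND_clause": "condition",
}

def map_chunks(chunks: list) -> list:
    """chunks: (chunk_type, text) pairs produced by an upstream chunker."""
    return [(CHUNK_TO_KCPM[ct], text)
            for ct, text in chunks if ct in CHUNK_TO_KCPM]

print(map_chunks([("COND_clause", "if the order is valid"),
                  ("NG_subject", "the clerk"),
                  ("VG_main", "confirms the order")]))
# [('condition', 'if the order is valid'), ('actor', 'the clerk'),
#  ('operation-type', 'confirms the order')]
```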
international conference on natural language processing | 2005
Günther Fliedl; Christian Kop; Heinrich C. Mayr; Martin Hölbling; Thomas Horn; Georg Weber; Christian Winkler
This paper discusses an advanced tagging concept which supplies information that allows for a tool-supported, step-by-step mapping of natural language requirements specifications to a conceptual (predesign) model. The focus lies on sentences containing conditions, as used for describing alternatives in business process specifications. It is shown how the tagging results are interpreted systematically, thus allowing for stepwise model generation.
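A toy version of the condition handling, splitting an "if ..., ..., otherwise ..." sentence into the branches of an alternative, might look like this; the pattern and output structure are assumptions.

```python
import re

# Hypothetical sketch: split a conditional requirements sentence into
# branches, as a step toward generating alternatives in a dynamic model.
COND = re.compile(r"[Ii]f (.+?), (.+?)(?:, otherwise (.+?))?\.?$")

def split_alternatives(sentence: str) -> dict:
    m = COND.match(sentence)
    if not m:
        return {"unconditional": sentence}
    condition, then_part, else_part = m.groups()
    branches = {"condition": condition, "then": then_part}
    if else_part:
        branches["otherwise"] = else_part
    return branches

print(split_alternatives(
    "If the order is valid, the clerk confirms it, otherwise he rejects it."))
# {'condition': 'the order is valid', 'then': 'the clerk confirms it',
#  'otherwise': 'he rejects it'}
```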
applications of natural language to data bases | 2000
Günther Fliedl; Christian Kop; Willi Mayerthaler; Heinrich C. Mayr; Christian Winkler
In this paper we discuss the linguistic base of standardized sentences as well as their structure and their employment in requirements analysis. Standardized sentences are quite helpful in the automatic analysis of requirements concerning static and dynamic aspects. Since standardized sentences must be filtered out of prose texts, which is a time-consuming task, we prefer to use them for requirements completion. Such completions inevitably emerge in the iterative process of requirements analysis. To enhance the process of requirements analysis we use a special approach called conceptual predesign, the results of which are mapped by heuristic rules to conceptual design schemas, e.g. formulated in UML.
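A minimal sketch of how one standardized sentence pattern could be matched and mapped by a heuristic rule to a UML-like association is given below; the sentence template and the output notation are illustrative assumptions.

```python
import re

# Hypothetical sketch: match one standardized sentence pattern and map it
# by a heuristic rule to a UML-like association (the template is assumed).
PATTERN = re.compile(
    r"^(?:A|An|The) (\w+) has (?:a|an|many) (\w+?)s?\.$", re.IGNORECASE)

def map_standardized_sentence(sentence: str):
    m = PATTERN.match(sentence)
    if not m:
        return None  # not a standardized sentence; needs manual analysis
    whole, part = m.groups()
    return f"{whole.capitalize()} '1' --- '*' {part.capitalize()}"

print(map_standardized_sentence("A customer has many orders."))
# Customer '1' --- '*' Order
```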