Rainer Knauf
Technische Universität Ilmenau
Publications
Featured research published by Rainer Knauf.
Systems, Man and Cybernetics | 2002
Rainer Knauf; Avelino J. Gonzalez; Thomas Abel
We describe a complete methodology for the validation of rule-based expert systems. This methodology is presented as a five-step process that has two central themes: 1) to create a minimal set of test inputs that adequately cover the domain represented in the knowledge base; and 2) a Turing-test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts. The development of a minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large set of test inputs to one that is practical, keeping in mind the nature and purpose of the developed system. The Turing-test-like evaluation methodology makes use of only one panel of experts both to evaluate each set of test cases and to compare the results with those of the expert system, as well as with those of the other experts. The hypothesis presented is that much can be learned about the experts themselves by having them anonymously evaluate each other's responses to the same test inputs. Thus, we are better able to determine the validity of an expert system. Depending on its purpose, we introduce various ways to express validity, as well as a technique to use the validity assessment for the refinement of the rule base. Lastly, we describe a partial implementation of the test input minimization process on a small but nontrivial expert system. The effectiveness of the technique was evaluated by seeding errors into the expert system, generating the appropriate set of test inputs, and determining whether the errors could be detected by the suggested methodology.
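To make the test-input reduction idea concrete, here is a minimal Python sketch. The rule base, attribute domains, and the coverage criterion (one representative input per distinct rule-activation pattern) are invented for illustration and are not the paper's actual formalism.

# Minimal sketch of test-input reduction; rules and attributes are hypothetical.
from itertools import product

attributes = {
    "temperature": ["low", "normal", "high"],
    "pressure": ["low", "high"],
    "valve": ["open", "closed"],
}

rules = [
    ({"temperature": "high", "valve": "open"}, "shut_valve"),
    ({"pressure": "low"}, "increase_pressure"),
]

def fires(conditions, case):
    """A rule fires if every one of its conditions matches the test case."""
    return all(case[a] == v for a, v in conditions.items())

# Exhaustive set of test inputs over the attribute domains.
exhaustive = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]

# Reduce: keep one representative input per distinct set of firing rules,
# so every rule-activation pattern in the knowledge base is still covered.
seen, minimal = set(), []
for case in exhaustive:
    pattern = frozenset(i for i, (cond, _) in enumerate(rules) if fires(cond, case))
    if pattern not in seen:
        seen.add(pattern)
        minimal.append(case)

print(f"{len(exhaustive)} exhaustive inputs reduced to {len(minimal)} test cases")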
Systems, Man and Cybernetics | 1999
Rainer Knauf; Avelino J. Gonzalez; Klaus P. Jantke
This paper describes a complete methodology for the validation of rule-based expert systems. The methodology is presented as a five-step process that has three central themes: creation of a minimal set of test inputs that adequately cover the domain represented in the knowledge base; a Turing-test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts; and use of the validation results for system improvement. The development of a minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large exhaustive set of test inputs to one that is practical. The Turing-test-like evaluation methodology makes use of a panel of experts both to evaluate each set of test cases and to compare the results with those of the expert system, as well as with those of the other experts in the validation panel. The hypothesis presented is that much can be learned about the experts themselves by having them evaluate each other's responses to the same test inputs anonymously. Thus, by carefully scrutinizing the results of each expert in relation to the other experts, we are better able to judge an evaluator's expertise and, consequently, better determine the validity of an expert system.
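The mutual-evaluation idea can be sketched as follows; the rating matrix and the competence weighting below are hypothetical illustrations of the principle, not the paper's validity formulas.

# Hedged sketch: experts anonymously rate every solver's responses (including
# each other's and the system's); the ratings an expert earns from peers weight
# that expert's judgment of the system. All numbers are invented.
import numpy as np

# ratings[rater, solver]: mean rating (0..1) a rater gave to a solver's answers.
# Solvers 0..2 are the human experts; solver 3 is the expert system.
ratings = np.array([
    [np.nan, 0.9, 0.6, 0.8],   # expert 0's ratings
    [0.8, np.nan, 0.5, 0.7],   # expert 1's ratings
    [0.9, 0.8, np.nan, 0.9],   # expert 2's ratings
])

# Competence of each expert: how well peers rated that expert's own answers.
competence = np.nanmean(ratings[:, :3], axis=0)

# Validity of the system: experts' ratings of it, weighted by rater competence.
validity = np.average(ratings[:, 3], weights=competence)
print(f"expert competence: {competence.round(2)}, system validity: {validity:.2f}")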
Congress on Evolutionary Computation | 2011
Yoshitaka Sakurai; Kouhei Takada; Natsuki Tsukamoto; Takashi Onoyama; Rainer Knauf; Setsuo Tsuruta
A delivery route optimization system greatly improves real-time delivery efficiency. To realize such optimization, the distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds (at most 1,500–2,000) of cities within interactive response time (around 3 seconds) and with expert-level accuracy (an error rate below 3%). Moreover, the algorithm must be understandable and flexible, so that field experts and field engineers can understand it and adjust it to satisfy the field conditions. To meet these requirements, a Backtrack and Restart Genetic Algorithm (Br-GA) is proposed. This method combines backtracking and a GA with simple heuristics such as 2-opt and NI (Nearest Insertion), so that, in case of stagnation, the GA restarts with the population rolled back to its state in the generation before stagnation. Because it uses these simple heuristics, field experts and field engineers can easily understand and use the method. Using a tool that applies it, they can easily create or modify solutions and conditions interactively, depending on their field needs. Experimental results proved that the method meets the above-mentioned delivery scheduling requirements better than other methods from the viewpoint of optimality as well as simplicity.
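A rough sketch of the backtrack-and-restart control loop, assuming a much-simplified GA (selection plus a 2-opt improvement step, no crossover); the function names and parameters are invented, not the authors' implementation.

# Sketch: snapshot the population; on stagnation, roll back to the
# pre-stagnation snapshot and continue with fresh randomness.
import copy, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_step(tour, dist):
    """One 2-opt improvement pass: reverse the segment that shortens the tour most."""
    best = tour
    for i in range(1, len(tour) - 1):
        for j in range(i + 2, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand, dist) < tour_length(best, dist):
                best = cand
    return best

def br_ga(dist, pop_size=20, generations=200, patience=15):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    snapshot, best_len, stalled = copy.deepcopy(pop), float("inf"), 0
    for _ in range(generations):
        pop = sorted((two_opt_step(random.choice(pop), dist) for _ in range(pop_size)),
                     key=lambda t: tour_length(t, dist))
        cur = tour_length(pop[0], dist)
        if cur < best_len:
            best_len, snapshot, stalled = cur, copy.deepcopy(pop), 0
        else:
            stalled += 1
            if stalled >= patience:          # stagnation: backtrack and restart
                pop, stalled = copy.deepcopy(snapshot), 0
    return pop[0], best_len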
International Conference on Advanced Learning Technologies | 2010
Rainer Knauf; Yoshitaka Sakurai; Kouhei Takada; Setsuo Tsuruta
A modeling approach for learning processes is utilized to process, evaluate, and refine them. A formerly developed concept called storyboarding has been applied at Tokyo Denki University (TDU) to model the various curricula through which students progress in their studies. Along with this particular storyboard, we developed a data mining technology to estimate the chances of success for students following each curricular path. Here, we introduce a concept of learner profiling. The profile represents a student's individual properties, talents, and preferences, constructed through mining personal metadata about learning preferences.
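One plausible reading of the profiling idea, sketched in Python: profiles as feature vectors, with success estimated from the most similar former students. The features and data are invented for illustration.

# Hypothetical profile features and data; not the system's actual feature model.
import numpy as np

# columns: [prefers_visual, prefers_group_work, self_paced, prior_math_score]
former_profiles = np.array([
    [1.0, 0.0, 1.0, 0.8],
    [0.0, 1.0, 0.0, 0.6],
    [1.0, 1.0, 1.0, 0.9],
])
former_gpa = np.array([3.2, 2.6, 3.6])

def estimate_success(profile, k=2):
    """Average the GPA of the k former students with the most similar profiles."""
    d = np.linalg.norm(former_profiles - profile, axis=1)
    return former_gpa[np.argsort(d)[:k]].mean()

print(estimate_success(np.array([1.0, 0.0, 1.0, 0.7])))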
Systems, Man and Cybernetics | 2010
Yoshitaka Sakurai; Kouhei Takada; Natsuki Tsukamoto; Takashi Onoyama; Rainer Knauf; Setsuo Tsuruta
A delivery route optimization system greatly improves real-time delivery efficiency. To realize such optimization, the distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds (at most around 2,000) of cities within interactive response time (around 3 seconds) and with expert-level accuracy (an error rate below 3%). To meet these requirements, an Inner Random Restart Genetic Algorithm (Irr-GA) is proposed. This method combines random restarts with a GA that uses different types of simple heuristics such as 2-opt and NI (Nearest Insertion). Because it uses these simple heuristics, field experts and field engineers can easily understand and use the method. Using a tool that applies it, they can easily create or modify solutions and conditions interactively, depending on their field needs. Experimental results proved that the method meets the above-mentioned delivery scheduling requirements better than other methods from the viewpoint of optimality as well as simplicity.
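The NI (Nearest Insertion) heuristic named here is a textbook construction; the following is a sketch of it, not the authors' implementation.

# Nearest Insertion: repeatedly pick the unvisited city closest to the tour
# and insert it where it lengthens the tour the least.
def nearest_insertion(dist):
    n = len(dist)
    tour = [0, min(range(1, n), key=lambda c: dist[0][c])]  # start with closest pair
    remaining = set(range(n)) - set(tour)
    while remaining:
        city = min(remaining, key=lambda c: min(dist[c][t] for t in tour))
        pos = min(range(len(tour)),
                  key=lambda i: dist[tour[i]][city]
                                + dist[city][tour[(i + 1) % len(tour)]]
                                - dist[tour[i]][tour[(i + 1) % len(tour)]])
        tour.insert(pos + 1, city)
        remaining.discard(city)
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(nearest_insertion(dist))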
International Conference on Advanced Learning Technologies | 2007
Rainer Knauf; Yoshitaka Sakurai; Setsuo Tsuruta
Learning systems suffer from the lack of an explicit and adaptable didactic design. One way to overcome this deficiency is to represent the didactic design (semi-)formally. A modeling approach, storyboarding, is outlined here. Storyboarding sets the stage for applying knowledge engineering technologies to verify and validate the didactics behind a learning process. As a vision, didactics can be refined according to revealed weaknesses and proven excellence. Furthermore, successful didactic patterns can be inductively inferred by analyzing the particular knowledge processing and its alleged contribution to learning success.
Signal-Image Technology and Internet-Based Systems | 2014
Takashi Kawabe; Takaaki Motomura; Masaki Suzuki; Yukiko Yamamoto; Setsuo Tsuruta; Yoshitaka Sakurai; Rainer Knauf
This paper introduces a case-based approximation method to solve large-scale Traveling Salesman Problems (TSPs) in a short time (around 3 seconds) with an error rate below 3%. This method is based on the insight that, at least for route scheduling, a majority of real-world problems are very often similar to previous ones. Thus, a solution can be derived from former solutions as follows: (1) selecting the most similar TSP from a library (CB: Case Base) of former TSP solutions, (2) removing the locations that are not included in the newly given TSP, and (3) adding the new locations by Nearest Insertion (NI) and possibly adjusting the result by a GA incorporating NI. Creating solutions by Case-Based Reasoning (CBR) in this way avoids the computational cost of creating new solutions from scratch. The evaluation of this method revealed remarkable results. Even LKH, the world's fastest near-optimal approximate TSP solver, either needed more than 3 seconds or its worst error rate exceeded 3%, whereas the worst error rate of the proposed method is less than 1% within 3 seconds. This is about 10-100 times better than that of our former approach Br-GA (Backtrack and Restart type GA).
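Steps (2) and (3) of this adaptation can be sketched as follows, assuming a distance matrix and integer city IDs; the case selection and the GA refinement are omitted, and all names are illustrative.

# Reuse a stored tour: drop cities absent from the new problem, then insert
# the genuinely new cities by cheapest (NI-style) insertion.
def adapt_case(stored_tour, new_cities, dist):
    # (2) remove locations that are not part of the new problem
    tour = [c for c in stored_tour if c in new_cities]
    # (3) add each new location where it lengthens the tour the least
    for city in (c for c in new_cities if c not in tour):
        pos = min(range(len(tour)),
                  key=lambda i: dist[tour[i]][city]
                                + dist[city][tour[(i + 1) % len(tour)]]
                                - dist[tour[i]][tour[(i + 1) % len(tour)]])
        tour.insert(pos + 1, city)
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(adapt_case(stored_tour=[0, 1, 2], new_cities={0, 2, 3}, dist=dist))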
Congress on Evolutionary Computation | 2015
Takashi Kawabe; Yuuta Kobayashi; Setsuo Tsuruta; Yoshitaka Sakurai; Rainer Knauf
Delivery route optimization is a well-known NP-complete problem based on the Traveling Salesman Problem (TSP), involving 20-2000 cities, and human-oriented factors make the problem even more complex. Despite this NP-completeness, the scheduling must be solved every time within interactive response time and below expert-level error rate or local optimality, while considering human-oriented factors including personal, social, and cultural ones. To cope with this, cases and NI (Nearest Insertion) are introduced into a Genetic Algorithm (GA), based on the insight that real problems are similar to previous ones. A solution can be derived from former solutions, considering human-oriented factors, as follows: (1) retrieving the most similar cases, (2) modifying them by removing and adding locations with NI, and (3) further optimizing them by a GA that uses only NI operations. This not only diminishes the cost of computing new solutions from scratch but also inherits many parts of previous routes, respecting human factors. Experimental evaluation revealed remarkable results. Whereas the most effective TSP solving method, LKH, needed more than 3 seconds, the proposed method yielded results with a worst error rate within 3% in less than 3 seconds. Furthermore, the proposed method is able to inherit most of the delivery routes, while LKH leads to significant changes.
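Retrieval step (1) might look like the sketch below under a simple overlap-based similarity (Jaccard over city sets); the similarity measure is an assumption, not necessarily the authors'.

# Pick the stored case whose city set overlaps most with the new problem,
# so that most of the previous route (and the human factors embodied in it)
# can be inherited.
def most_similar_case(case_base, new_cities):
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(case_base, key=lambda tour: jaccard(set(tour), set(new_cities)))

case_base = [[0, 1, 2, 3], [4, 5, 6], [0, 2, 4, 6]]
print(most_similar_case(case_base, new_cities={0, 2, 3, 6}))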
Archive | 2013
Setsuo Tsuruta; Rainer Knauf; Shinichi Dohi; Takashi Kawabe; Yoshitaka Sakurai
Universities have a complicated system of course offerings, registration rules, and prerequisite courses, which should be matched to students' dynamic learning needs and desires. We address this problem by developing an educational learning system called the "Dynamic Storyboarding System". Besides modeling learning processes, this system aims at evaluating and refining university curricula to reach an optimum of learning success in terms of the best possible cumulative grade point average (GPA). This is performed by applying Educational Data Mining (EDM) to former students' curricula and their degree of success (GPA), thus uncovering golden didactic knowledge for successful education. It consists of mining a decision tree (DT) and applying it to curricula planned by current students. Students receive an estimate of the GPA they are likely to achieve, along with a recommendation to supplement a partial path toward optimal success. Our approach includes individual learner profiles. The profiling concept initially uses the pre-university educational history and is dynamically extended by the students' university study results. The profiles are used by applying the EDM technology to students whose profiles are highly similar to that of the student under consideration. A feasibility study showed the usefulness of the system. The effect has been validated by cross-validation with about 200 students' records: the mean difference between the original GPA and the estimated one was 0.43, with a standard deviation of 0.30.
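The decision-tree mining step can be illustrated with scikit-learn; the binary course-taken encoding and the toy data are invented, not the system's actual feature model.

# Learn a decision tree from former students' curricular paths and GPAs,
# then estimate the GPA of a planned path.
from sklearn.tree import DecisionTreeRegressor

# rows: former students; columns: whether a given course was in the curriculum
X_former = [[1, 0, 1, 1],
            [0, 1, 1, 0],
            [1, 1, 0, 1],
            [0, 0, 1, 1]]
gpa_former = [3.4, 2.8, 3.1, 2.5]

tree = DecisionTreeRegressor(max_depth=3).fit(X_former, gpa_former)

planned_path = [[1, 0, 1, 0]]          # a current student's planned curriculum
print(f"estimated GPA: {tree.predict(planned_path)[0]:.2f}")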
Congress on Evolutionary Computation | 2015
Takashi Kawabe; Yoshimi Namihira; Kouta Suzuki; Munehiro Nara; Yoshitaka Sakurai; Setsuo Tsuruta; Rainer Knauf
To detect false information and rumors spread on Twitter during and after the Great East Japan Earthquake, a tweet credibility assessment method was proposed, based on topic and opinion classification. Credibility is assessed by calculating the ratio of identical opinions to all opinions about a topic, where topics are identified by topic models generated with Latent Dirichlet Allocation. To identify the opinion (positive or negative) of a tweet, sentiment analysis is performed using a semantic orientation dictionary. However, identifying the usually very few false tweets is a kind of imbalanced data analysis, and accuracy is a problem. The accuracy of the originally proposed method suffered because the sentiment of most tweets was identified as negative by the baseline (namely Takamura's) semantic orientation dictionary. To cope with this problem, a method for extracting the sentiment orientations of words and phrases is also proposed to improve the analysis of tweet credibility. This method 1) evolutionarily learns from a large amount of social data on Twitter, 2) focuses on adjective predicates, and 3) considers co-occurrences with negation expressions or multiple adjectives, between subjects and predicates, etc. The effects are proven by experiments using a large number of real tweets, in which rumor tweets could be detected much more accurately. In contrast to the baseline semantic dictionary, our method succeeds in imbalanced data analysis.
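The credibility ratio can be sketched as follows for a single topic; the toy lexicon stands in for the semantic orientation dictionary, and the opinion classifier is deliberately simplistic.

# For one topic, classify each tweet's opinion as positive or negative and
# take the ratio of the majority opinion to all opinions (the abstract's
# "ratio of the same opinions to all opinions"). Lexicon and tweets invented.
orientation = {"safe": +1, "dangerous": -1, "confirmed": +1, "fake": -1}

def opinion(tweet):
    score = sum(orientation.get(w, 0) for w in tweet.lower().split())
    return "pos" if score >= 0 else "neg"

def credibility(tweets):
    opinions = [opinion(t) for t in tweets]
    majority = max(opinions.count("pos"), opinions.count("neg"))
    return majority / len(opinions)

tweets = ["the area is safe and confirmed", "this is fake", "roads confirmed safe"]
print(f"credibility: {credibility(tweets):.2f}")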