
Publication


Featured research published by Rolf Wiehagen.


Algorithmic Learning Theory | 1991

Polynomial-time inference of arbitrary pattern languages

Steffen Lange; Rolf Wiehagen

A pattern is a finite string of constants and variables (cf. [1]). The language of a pattern is the set of all strings which can be obtained by substituting non-null strings of constants for the variables of the pattern. In the present paper, we consider the problem of learning pattern languages from examples. As a main result we present an inconsistent polynomial-time algorithm which identifies every pattern language in the limit. Furthermore, we investigate inference of arbitrary pattern languages within the framework of learning from good examples. Finally, we show that every pattern language can be identified in polynomial time from only polynomially many disjointness queries.
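As an illustration of the definition above (a sketch, not taken from the paper): since every variable must be replaced by the same non-null constant string at each of its occurrences, membership in a pattern language can be tested with a regular expression using capturing groups and backreferences. The helper `pattern_to_regex` and its token convention (variables written `x1`, `x2`, …) are hypothetical choices for this example.

```python
import re

def pattern_to_regex(pattern):
    """Translate a pattern (list of tokens; variables start with 'x')
    into a regex: each variable's first occurrence becomes a group
    matching a non-empty string, later occurrences become
    backreferences, and constants are matched literally."""
    groups = {}
    parts = []
    for tok in pattern:
        if tok.startswith("x"):              # variable token
            if tok in groups:
                parts.append(r"\%d" % groups[tok])  # same substitution again
            else:
                groups[tok] = len(groups) + 1
                parts.append(r"(.+)")               # non-null substitution
        else:                                 # constant token
            parts.append(re.escape(tok))
    return "^" + "".join(parts) + "$"

# The language of the pattern x1 a b x1 is { w a b w : w non-empty }.
rx = pattern_to_regex(["x1", "a", "b", "x1"])
print(bool(re.match(rx, "cabc")))   # True:  x1 = "c"
print(bool(re.match(rx, "cabd")))   # False: the two x1 occurrences differ
```

Note that this only decides membership for a known pattern; the learning problem treated in the paper, inferring the pattern itself from examples, is much harder.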


Information Sciences | 1980

Research in the theory of inductive inference by GDR mathematicians – A survey

Reinhard Klette; Rolf Wiehagen

Recent results in the theory of inductive inference are summarized. They concern deciphering of automata, language identification, prediction of functions, inference with additional information, strategies, functionals, index sets, characterization of identification types, uniform inference, and inference of nonrandom sequences. For proofs and further results in the field of inductive inference due to mathematicians of the German Democratic Republic, a detailed bibliography is included.


Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic | 1990

A Thesis in Inductive Inference

Rolf Wiehagen

Inductive inference is the theory of identifying recursive functions from examples. In [26], [27], [30] the following thesis was stated: any class of recursive functions which is identifiable at all can always be identified by an enumeratively working strategy. Moreover, the identification can always be realized with respect to a suitable nonstandard (i.e., non-Gödel) numbering. We review some of the results which have led us to state this thesis. New results concerning monotonic identification are presented which corroborate the thesis. Some consequences of the thesis for the development of the theory of inductive inference during the last decade are discussed. Problems for further investigation as well as further applications of non-Gödel numberings in inductive inference are summarized.
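The "enumeratively working strategy" of the thesis can be sketched as Gold-style identification by enumeration: after each example, conjecture the least-index hypothesis consistent with all data seen so far. The finite hypothesis list and the example stream below are toy assumptions (real numberings enumerate infinitely many partial recursive functions).

```python
def identification_by_enumeration(hypotheses, data_stream):
    """After each example (x, f(x)), yield the least index i such that
    hypotheses[i] agrees with every example seen so far.
    Hypotheses are assumed to be total functions here (a toy setting)."""
    seen = []
    for x, y in data_stream:
        seen.append((x, y))
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in seen):
                yield i         # current conjecture
                break

# Toy numbering: constant zero, identity, squaring.
hyps = [lambda n: 0, lambda n: n, lambda n: n * n]
stream = [(0, 0), (1, 1), (2, 4), (3, 9)]
print(list(identification_by_enumeration(hyps, stream)))  # [0, 1, 2, 2]
```

The conjecture sequence stabilizes on index 2 (squaring), illustrating identification in the limit: finitely many mind changes, then convergence to a correct program.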


Theoretical Computer Science | 1983

On the power of probabilistic strategies in inductive inference

Rolf Wiehagen; Rusins Freivalds; Efim B. Kinber

Inductive inference of programs of recursive functions from input/output examples by probabilistic strategies with an a priori bound n ∈ ℕ on the number of changes of hypotheses is investigated. Advantages of probabilistic strategies over deterministic ones are shown both in principle (for every n ≥ 2, with probability arbitrarily close to 1, probabilistic strategies can infer function classes which cannot be inferred by any deterministic strategy with n changes of hypotheses) and in computational complexity (a linear speed-up of the number of changes of hypotheses necessary for inference by deterministic strategies).


Theoretical Computer Science | 1993

On the power of inductive inference from good examples

Rusins Freivalds; Efim B. Kinber; Rolf Wiehagen

The usual information in inductive inference available for the purposes of identifying an unknown recursive function f is the set of all input/output examples (x, f(x)), x ∈ ℕ. In contrast to this approach we show that it is considerably more powerful to work with finite sets of "good" examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the identification has to be realized, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Gödel numberings.


Algorithmic Learning Theory | 1992

From Inductive Inference to Algorithmic Learning Theory

Rolf Wiehagen

We present two phenomena which were discovered in pure recursion-theoretic inductive inference, namely inconsistent learning (learning strategies producing apparently “senseless” hypotheses can solve problems unsolvable by “reasonable” learning strategies) and learning from good examples (“much less” information can lead to much more learning power). Recently, it has been shown that these phenomena also hold in the world of polynomial-time algorithmic learning. Thus inductive inference can be understood and used as a source of potent ideas guiding both research and applications in algorithmic learning theory.


Algorithmic Learning Theory | 1996

Learning by Erasing

Steffen Lange; Rolf Wiehagen; Thomas Zeugmann

Learning by erasing is the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis that is never eliminated; this hypothesis must be a solution to the actual learning problem.
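A minimal sketch of this process, under the toy assumption of a finite list of total hypotheses: every hypothesis contradicted by the data is erased, and the current conjecture is always the least index still alive.

```python
def learn_by_erasing(hypotheses, data_stream):
    """Erase every hypothesis contradicted by an example; after each
    example, yield the least index never erased (the current conjecture)."""
    erased = set()
    for x, y in data_stream:
        for i, h in enumerate(hypotheses):
            if i not in erased and h(x) != y:
                erased.add(i)                 # contradicted: erase forever
        alive = [i for i in range(len(hypotheses)) if i not in erased]
        yield alive[0] if alive else None

# Toy numbering: constant zero, identity, squaring.
hyps = [lambda n: 0, lambda n: n, lambda n: n * n]
stream = [(0, 0), (1, 1), (2, 4), (3, 9)]
print(list(learn_by_erasing(hyps, stream)))  # [0, 1, 2, 2]
```

Unlike identification by enumeration, an erasing learner never reconsiders a discarded hypothesis, which is exactly the restriction whose power the paper analyzes.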


Algorithmic Learning Theory | 1994

Language Learning from Good Examples

Steffen Lange; Jochen Nessel; Rolf Wiehagen

We study learning of indexable families of recursive languages from good examples. We show that this approach is considerably more powerful than learning from all examples and point out reasons for this additional power. We present several characterizations of types of learning from good examples. We derive similarities as well as differences to learning of recursive functions from good examples.


International Journal of Foundations of Computer Science | 1997

Classifying Predicates and Languages

Carl H. Smith; Rolf Wiehagen; Thomas Zeugmann

The present paper studies a particular collection of classification problems, namely the classification of recursive predicates and languages, in order to arrive at a deeper understanding of what classification really is. In particular, the classification of predicates and languages is compared with the classification of arbitrary recursive functions and with their learnability. The investigation undertaken is refined by introducing classification within a resource bound, resulting in a new hierarchy. Furthermore, a formalization of multi-classification is presented and completely characterized in terms of standard classification. Additionally, consistent classification is introduced and compared with both resource-bounded classification and standard classification. Finally, the classification of families of languages that have attracted attention in learning theory is also studied.


Journal of Computer and System Sciences | 1995

On Learning Multiple Concepts in Parallel

Efim B. Kinber; Carl H. Smith; Mahendran Velauthapillai; Rolf Wiehagen

A class U of recursive functions is said to be finitely (a, b)-learnable if and only if for any b-tuple of pairwise distinct functions from U at least a of the b functions have been learned correctly from examples of their behavior after some finite amount of time. It is shown that this approach, called learning in parallel, is more powerful than nonparallel learning. Furthermore, it is shown that imposing on parallel learning the restriction that the learning algorithm also identify on which of the input functions it is successful (called parallel superlearning) is still more powerful than nonparallel learning. A necessary and sufficient condition is derived for (a, b)-superlearning and (c, d)-superlearning to be of the same power. Our new notion of parallel learning is compared with other, previously defined notions of learning in parallel. Finally, we synthesize our notion of learning in parallel with the concept of team learning and obtain some interesting trade-offs and comparisons.

Collaboration


Rolf Wiehagen's top co-authors and their affiliations.

Top Co-Authors

Sanjay Jain

National University of Singapore


Jochen Nessel

Kaiserslautern University of Technology


Christophe Papazian

École normale supérieure de Lyon
