Publication


Featured research published by Thomas Zeugmann.


Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report | 1995

A Guided Tour Across the Boundaries of Learning Recursive Languages

Thomas Zeugmann; Steffen Lange

The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data as well as from both positive and negative data. We consider the influence of various monotonicity constraints on the learning process and provide a thorough study concerning the influence of several parameters. In particular, we present examples pointing to typical problems and solutions in the field. Then we provide a unifying framework for learning. Furthermore, we survey results concerning learnability in dependence on the hypothesis space, and concerning order independence. Moreover, new results dealing with the efficiency of learning are provided. First, we investigate the power of iterative learning algorithms. The second measure of efficiency studied is the number of mind changes a learning algorithm is allowed to perform. In this setting we consider whether or not the monotonicity constraints introduced influence the efficiency of learning algorithms.
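
The identification-in-the-limit setting surveyed here can be illustrated with a minimal sketch: an enumerative learner over a toy indexed family that, after each positive example, conjectures the least index whose language contains everything seen so far. The family, the membership predicates, and the presentation below are illustrative assumptions, not taken from the paper.

```python
from typing import Callable, Iterable, Optional

# A toy indexed family of uniformly recursive languages over {a,b}*:
# each entry is a decidable membership predicate for L_j.  Illustrative only.
FAMILY: list[Callable[[str], bool]] = [
    lambda w: set(w) <= {"a"},        # L_0 = a*
    lambda w: set(w) <= {"a", "b"},   # L_1 = {a,b}*
    lambda w: w.startswith("a"),      # L_2 = words beginning with a
]

def learn_by_enumeration(text: Iterable[str]) -> Optional[int]:
    """Identification in the limit from positive data: after each new example,
    conjecture the least index j whose language contains all examples seen."""
    seen: set[str] = set()
    hypothesis: Optional[int] = None
    for w in text:
        seen.add(w)
        for j, member in enumerate(FAMILY):
            if all(member(x) for x in seen):
                hypothesis = j
                break
        print(f"after {sorted(seen)}: conjecture L_{hypothesis}")
    return hypothesis

# On the positive presentation a, aa, ab the learner first conjectures L_0
# and performs one mind change to L_1 once the 'b' appears.
learn_by_enumeration(["a", "aa", "ab"])
```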


TAEBC-2009 | 2009

Stochastic Algorithms: Foundations and Applications

Osamu Watanabe; Thomas Zeugmann

Contents: Invited Papers: Scenario Reduction Techniques in Stochastic Programming; Statistical Learning of Probabilistic BDDs. Regular Contributions: Learning Volatility of Discrete Time Series Using Prediction with Expert Advice; Prediction of Long-Range Dependent Time Series Data with Performance Guarantee; Bipartite Graph Representation of Multiple Decision Table Classifiers; Bounds for Multistage Stochastic Programs Using Supervised Learning Strategies; On Evolvability: The Swapping Algorithm, Product Distributions, and Covariance; A Generic Algorithm for Approximately Solving Stochastic Graph Optimization Problems; How to Design a Linear Cover Time Random Walk on a Finite Graph; Propagation Connectivity of Random Hypergraphs; Graph Embedding through Random Walk for Shortest Paths Problems; Relational Properties Expressible with One Universal Quantifier Are Testable; Theoretical Analysis of Local Search in Software Testing; Firefly Algorithms for Multimodal Optimization; Economical Caching with Stochastic Prices; Markov Modelling of Mitochondrial BAK Activation Kinetics during Apoptosis; Stochastic Dynamics of Logistic Tumor Growth.


Journal of Computer and System Sciences | 1996

Incremental Learning from Positive Data

Steffen Lange; Thomas Zeugmann

The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation at a time, together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the concept to be learned. This basic scenario is referred to as iterative learning. Iterative inference can be refined by allowing the learner to store an a priori bounded number of carefully chosen examples, resulting in bounded example memory inference. Additionally, feed-back identification is introduced. Now, the learner is enabled to ask whether or not a particular element did already appear in the data provided so far. Our results are threefold. First, the learning capabilities of the various models of incremental learning are related to previously studied learning models. It is proved that incremental learning can always be simulated by inference devices that are both set-driven and conservative. Second, feed-back learning is shown to be more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference, which itself also extends that of iterative learning. In particular, the learning power of bounded example memory inference always increases if the number of examples the learner is allowed to store is incremented. Third, a sufficient condition for iterative inference allowing non-enumerative learning is provided. The results obtained provide strong evidence that there is no unique way to design superior incremental learning algorithms. Instead, incremental learning is the art of knowing what to overlook.
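
A minimal sketch of the information flow in the incremental models described above, under the assumption that hypotheses are plain values; the update rules are illustrative placeholders rather than the paper's constructions.

```python
# Iterative learning vs. bounded example-memory inference: what the learner
# is allowed to see at each step.  Update rules below are illustrative only.

def iterative_learner(text, update, h0=None):
    """Iterative learning: at each step the learner sees only its previous
    hypothesis and the current element, never the full history."""
    h = h0
    for w in text:
        h = update(h, w)                  # h_{n+1} = update(h_n, w_n)
    return h

def bounded_memory_learner(text, update, k, h0=None):
    """Bounded example-memory inference: in addition, up to k carefully
    chosen earlier examples may be stored and revised at every step."""
    h, memory = h0, []
    for w in text:
        h, memory = update(h, memory, w)
        assert len(memory) <= k           # the memory bound is fixed a priori
    return h

# Toy usage: an iterative learner over {a,b}* that conjectures "a*" until it
# sees a 'b', then switches (one mind change) to "{a,b}*".
print(iterative_learner(["a", "aa", "ab"],
                        lambda h, w: "{a,b}*" if ("b" in w or h == "{a,b}*") else "a*"))
```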


Conference on Learning Theory | 1993

Language learning in dependence on the space of hypotheses

Steffen Lange; Thomas Zeugmann

We study the learnability of indexed families L = (L_j)_{j ∈ ℕ} of uniformly recursive languages under certain monotonicity constraints. Thereby we distinguish between exact learnability (L has to be learned with respect to the space L of hypotheses itself), class preserving learning (L has to be inferred with respect to some space of hypotheses having the same range as L), and class comprising inference (L has to be learned with respect to some space of hypotheses whose range comprises range(L)). In particular, it is proved that, whenever monotonicity requirements are involved, exact learning is almost always weaker than class preserving inference, which itself turns out to be almost always weaker than class comprising learning. Next, we provide additional insight into the problem under what conditions, for example, exact and class preserving learning procedures are of equal power. Finally, we deal with the question of what kind of languages has to be added to the space of hypotheses in order to obtain superior learning algorithms.


Information & Computation | 1999

Incremental concept learning for bounded data mining

John Case; Sanjay Jain; Steffen Lange; Thomas Zeugmann

Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data items coming in. In k-bounded example-memory inference (k is a priori fixed) the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture, the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis correctly describing the target concept. Our results are manifold. An infinite hierarchy of more and more powerful feedback learners in dependence on the number k of queries allowed to be asked is established. However, the hierarchy collapses to 1-feedback inference if only indexed families of infinite concepts are considered, and moreover, its learning power is then equal to learning in the limit. But it remains infinite for concept classes of only infinite r.e. concepts. Both k-feedback inference and k-bounded example-memory identification are more powerful than iterative learning but incomparable to one another. Furthermore, there are cases where redundancy in the hypothesis space is shown to be a resource increasing the learning power of iterative learners. Finally, the union of at most k pattern languages is shown to be iteratively inferable.
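
The k-feedback protocol sketched below follows the description above, assuming the queried database is simply the set of elements presented so far; the query strategy and the update rule are illustrative placeholders.

```python
# k-feedback identification: before updating, the learner may compute up to
# k items and ask, for each one, whether it already occurred in the data.

def feedback_learner(text, k, propose_queries, update, h0=None):
    """Runs the feedback protocol: the learner queries membership of up to k
    computed items in the database of previously presented elements."""
    database, h = set(), h0
    for w in text:
        queries = propose_queries(h, w)[:k]          # at most k queries per step
        answers = {q: (q in database) for q in queries}
        h = update(h, w, answers)
        database.add(w)                              # w becomes part of the database
    return h

# Toy usage with k = 1: the learner asks whether the current element was
# presented before and counts repetitions in its hypothesis.
h = feedback_learner(["a", "b", "a"], 1,
                     propose_queries=lambda h, w: [w],
                     update=lambda h, w, ans: (h or 0) + (1 if ans[w] else 0))
print(h)   # 1: exactly one repeated presentation was detected
```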


Conference on Learning Theory | 1992

Types of monotonic language learning and their characterization

Steffen Lange; Thomas Zeugmann

The present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt. We characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem of Angluin (1980). Moreover, we study monotonic inference with iteratively working learning devices, which are of special interest in applications. In particular, it is proved that strong-monotonic inference can be performed with iteratively learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot.
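
The strong-monotonicity constraint can be spot-checked on a finite sample, as in the following sketch; the indexed family, the sample, and the hypothesis run are illustrative assumptions, not taken from the paper.

```python
# Strong monotonicity: along a run of the learner, every later hypothesis must
# generate a superset of the earlier ones.  Here this is only spot-checked on
# a finite sample of test words.

def strongly_monotonic_on(sample, hypotheses, member):
    """member(j, w) decides whether w belongs to L_j in the indexed family.
    Returns True if, on the sample, L_{h_i} ⊆ L_{h_{i+1}} for consecutive i."""
    for h_prev, h_next in zip(hypotheses, hypotheses[1:]):
        for w in sample:
            if member(h_prev, w) and not member(h_next, w):
                return False
    return True

# Toy family: L_0 = a*, L_1 = {a,b}*.  The run 0, 1 is strongly monotonic on
# the sample, whereas the run 1, 0 is not.
member = lambda j, w: set(w) <= ({"a"} if j == 0 else {"a", "b"})
print(strongly_monotonic_on(["a", "ab"], [0, 1], member))  # True
print(strongly_monotonic_on(["a", "ab"], [1, 0], member))  # False
```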


International Workshop on Nonmonotonic and Inductive Logic | 1991

Monotonic versus non-monotonic language learning

Steffen Lange; Thomas Zeugmann

In the present paper strong-monotonic, monotonic and weak-monotonic reasoning is studied in the context of algorithmic language learning theory, from positive data as well as from positive and negative data.


Journal of Experimental and Theoretical Artificial Intelligence | 1994

Ignoring data may be the only way to learn efficiently

Rolf Wiehagen; Thomas Zeugmann

In designing learning algorithms it seems quite reasonable to construct them in a way such that all data the algorithm already has obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.


Information & Computation | 1995

Characterizations of Monotonic and Dual Monotonic Language Learning

Thomas Zeugmann; Steffen Lange; Shyam Kapur

The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the concept to be learned. The three versions of dual monotonicity reflect the requirement that the inference device has to produce specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic, monotonic & dual monotonic, as well as finite language learning from positive data in terms of recursively generable finite sets. These characterizations provide a unifying framework for learning from positive data under the various monotonicity constraints. Moreover, they yield additional insight into the problem of what a natural learning algorithm should look like.
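
Characterizations in terms of recursively generable finite sets can be illustrated, in the spirit of Angluin-style tell-tales, by a learner that conjectures the least index whose finite tell-tale is contained in the data and whose language contains all the data. The toy family and the hand-picked tell-tale sets below are illustrative assumptions, not the paper's construction.

```python
# How recursively generable finite sets can drive a learner: conjecture the
# least index whose finite "tell-tale" is already covered by the data and
# whose language contains every example seen so far.

FAMILY = {
    0: lambda w: set(w) <= {"a"},        # L_0 = a*
    1: lambda w: set(w) <= {"a", "b"},   # L_1 = {a,b}*
}
TELL_TALE = {0: {"a"}, 1: {"a", "b"}}    # finite subsets witnessing each language

def conjecture(seen):
    for j in sorted(FAMILY):
        if TELL_TALE[j] <= seen and all(FAMILY[j](w) for w in seen):
            return j
    return None

seen = set()
for w in ["a", "aa", "b", "ab"]:
    seen.add(w)
    print(w, "->", conjecture(seen))    # stabilises on 1 once "b" has appeared
```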


Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report | 1995

Learning and Consistency

Rolf Wiehagen; Thomas Zeugmann

In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm already has obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it.

Collaboration


Dive into Thomas Zeugmann's collaborations.

Top Co-Authors

Frank Stephan
National University of Singapore

Sanjay Jain
National University of Singapore

Rolf Wiehagen
Humboldt University of Berlin

Werner Römisch
Humboldt University of Berlin