
Publication


Featured research published by Samuel E. Moelius.


Machine Learning | 2008

U-shaped, iterative, and iterative-with-counter learning

John Case; Samuel E. Moelius

This paper solves an important problem left open in the literature by showing that U-shapes are unnecessary in iterative learning from positive data. A U-shape occurs when a learner first learns, then unlearns, and, finally, relearns some target concept. Iterative learning is a Gold-style learning model in which each of a learner's output conjectures depends only upon the learner's most recent conjecture and input element. Previous results had shown, for example, that U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct learning. Work on the aforementioned problem led to the consideration of an iterative-like learning model in which each of a learner's conjectures may, in addition, depend upon the number of elements so far presented to the learner. Learners in this new model are strictly more powerful than traditional iterative learners, yet not as powerful as full explanatory learners. Can every class of languages learnable in this new model be learned without U-shapes? For now, this problem is left open.
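As a minimal sketch of the models compared above (the names M, p_n, and x_n are chosen here for illustration and are not taken from the paper), write $p_n$ for a learner's $n$-th conjecture and $x_n$ for the $n$-th input element; the two update rules then differ only in their arguments:

$p_{n+1} = M(p_n, x_n)$ (iterative)

$p_{n+1} = M(p_n, x_n, n)$ (iterative-with-counter)

The counter model thus retains exactly one extra piece of information, the number $n$ of elements seen so far, whereas a full explanatory learner may consult the entire sequence $x_0, \ldots, x_n$.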


Algorithmic Learning Theory | 2008

Optimal Language Learning

John Case; Samuel E. Moelius

Gold's original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that: requires as little of each presentation of each object in the class in order to identify that object, and, for some presentation of some object in the class, requires less of that presentation in order to identify that object. Wiehagen considered this notion in the context of function learning, and characterized an optimal function learner as one that is class-preserving, consistent, and (in a very strong sense) non-U-shaped, with respect to the class of functions learned. Herein, Gold's notion is considered in the context of language learning. Intuitively, a language learner identifies a class of languages optimally iff there is no other learner that: requires as little of each text for each language in the class in order to identify that language, and, for some text for some language in the class, requires less of that text in order to identify that language. Many interesting results concerning optimal language learners are presented. First, it is shown that a characterization analogous to Wiehagen's does not hold in this setting. Specifically, optimality is not sufficient to guarantee Wiehagen's conditions; though, those conditions are sufficient to guarantee optimality. Second, it is shown that the failure of this analog is not due to a restriction on algorithmic learning power imposed by non-U-shapedness (in the strong form employed by Wiehagen). That is, non-U-shapedness, even in this strong form, does not restrict algorithmic learning power. Finally, for an arbitrary optimal learner F of a class of languages $\mathcal{L}$, it is shown that F optimally identifies a subclass $\mathcal{K}$ of $\mathcal{L}$ iff F is class-preserving with respect to $\mathcal{K}$.
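One way to make the optimality intuition above precise (a sketch only; the function $\mathrm{conv}$ is an assumed notation, not quoted from the paper): for a learner F and a text T, let $\mathrm{conv}(F, T)$ be the least $n$ such that F outputs the same correct conjecture on every initial segment of T of length at least $n$. Then F identifies $\mathcal{L}$ optimally iff there is no learner G identifying $\mathcal{L}$ such that

$\mathrm{conv}(G, T) \le \mathrm{conv}(F, T)$ for every text T for a language in $\mathcal{L}$, and

$\mathrm{conv}(G, T) < \mathrm{conv}(F, T)$ for at least one such text T.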


Theory of Computing Systems / Mathematical Systems Theory | 2009

Characterizing Programming Systems Allowing Program Self-Reference

John Case; Samuel E. Moelius


Theoretical Computer Science | 2009

Parallelism increases iterative learning power

John Case; Samuel E. Moelius


Algorithmic Learning Theory | 2007

Parallelism Increases Iterative Learning Power

John Case; Samuel E. Moelius


Theoretical Computer Science | 2013

Learning without coding

Sanjay Jain; Samuel E. Moelius; Sandra Zilles


Theoretical Computer Science | 2010

Incremental learning with temporary memory

Sanjay Jain; Steffen Lange; Samuel E. Moelius; Sandra Zilles


Mathematical Foundations of Computer Science | 2007

Properties complementary to program self-reference

John Case; Samuel E. Moelius


Algorithmic Learning Theory | 2008

Learning with Temporary Memory

Steffen Lange; Samuel E. Moelius; Sandra Zilles


Algorithmic Learning Theory | 2011

Learning without Coding

Samuel E. Moelius; Sandra Zilles

Collaboration


Dive into Samuel E. Moelius's collaborations.

Top Co-Authors

John Case

University of Delaware

Steffen Lange

Darmstadt University of Applied Sciences

Sanjay Jain

National University of Singapore
