Publication


Featured research published by John Case.


Theoretical Computer Science | 1983

Comparison of identification criteria for machine inductive inference

John Case; Carl H. Smith

A natural ω + 1 hierarchy of successively more general criteria of success for inductive inference machines is described, based on the size of sets of anomalies in programs synthesized by such machines. These criteria are compared to others in the literature. Some of our results are interpreted as tradeoff results or as showing the inherent relative computational complexity of certain processes, and others are interpreted from a positivistic, mechanistic philosophical stance as theorems in philosophy of science. The techniques of recursive function theory are employed, including ordinary and infinitary recursion theorems.
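
The success criteria being compared refine Gold's model of identification in the limit. As background intuition only (this is a sketch, not the paper's construction, and the hypothesis class is a hypothetical stand-in), here is a minimal Python example of an inductive inference machine that identifies any member of an effectively enumerable class of total functions by enumeration:

# Sketch: identification in the limit by enumeration (Gold-style learning).
# Toy hypothesis class: h_i(x) = (x + i) % 7, a stand-in for any effectively
# enumerable class of total recursive functions.

def hypothesis(i):
    return lambda x: (x + i) % 7

def machine(data):
    # Conjecture the least index consistent with the graph of the target
    # seen so far, given as a list of (x, f(x)) pairs.
    i = 0
    while any(hypothesis(i)(x) != y for x, y in data):
        i += 1
    return i

target = hypothesis(3)
data = []
for x in range(8):
    data.append((x, target(x)))
    print(machine(data))  # the conjectures stabilize on index 3

Once every smaller index has been contradicted by some data point, the machine's output never changes again. An anomaly-tolerant criterion of the kind the paper studies would instead allow the final program to disagree with the target on a bounded (or finite) set of inputs.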


International Colloquium on Automata, Languages and Programming | 1982

Machine Inductive Inference and Language Identification

John Case; Christopher Lynes

We show that for some classes ℒ of recursive languages, from the characteristic function of any L in ℒ, an approximate decision procedure for L with no more than n+1 mistakes can be (uniformly effectively) inferred in the limit; whereas, in general, a grammar (generation procedure) with no more than n mistakes cannot; for some classes an infinite sequence of perfectly correct decision procedures can be inferred in the limit, but single grammars with finitely many mistakes cannot; and for some classes an infinite sequence of decision procedures each with no more than n+1 mistakes can be inferred, but an infinite sequence of grammars each with no more than n mistakes cannot. This is true even though decision procedures generally contain more information than grammars. We also consider inference of grammars for r.e. languages from arbitrary texts, i.e., enumerations of the languages. We show that for any class of languages ℒ, if some machine, from arbitrary texts for any L in ℒ, can infer in the limit an approximate grammar for L with no more than 2·n mistakes, then some machine can infer in the limit, for each language in ℒ, an infinite sequence of grammars each with no more than n mistakes. This reduction from 2·n to n is best possible. From these and other results we obtain and compare several natural inference hierarchies. Lastly we show that if we restrict ourselves to recursive texts, there is a machine which, for any r.e. language, infers in the limit an infinite sequence of grammars each with only finitely many mistakes. We employ recursion-theoretic methods, including infinitary and ordinary recursion theorems.
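
For background intuition on inference from text (a sketch of the classic setting, not a construction from the paper): the class of all finite languages is learnable in the limit from positive data by the learner that simply conjectures exactly the data seen so far.

# Sketch: learning the finite languages in the limit from positive text.
# A "grammar" here is just the finite set itself (a toy hypothesis space).

def learner(text_prefix):
    return frozenset(text_prefix)  # conjecture: exactly the examples seen

L = {"a", "ab", "abb"}                      # a finite target language
text = ["a", "ab", "a", "abb", "ab", "a"]   # a text for L (repetitions allowed)

for t in range(1, len(text) + 1):
    print(sorted(learner(text[:t])))
# Once all of L has appeared, the conjecture is L forever: convergence in
# the limit, here to a mistake-free decision procedure and grammar at once.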


Theory of Computing Systems / Mathematical Systems Theory | 1974

Periodicity in generations of automata

John Case

A class of automata which build other automata is defined. These automata are called Turing machine automata because each one contains a Turing machine which acts as its computer-brain and which completely determines what its offspring, if any, will be. We show that for the descendants of an arbitrary progenitor Turing machine automaton there are exactly three possibilities: (1) there is a sterile descendant after an arbitrary number of generations, (2) after a delay of an arbitrary number of generations, the descendants repeat in generations with an arbitrary period, or (3) the descendants are aperiodic. We also show what sort of computing ability may be realized by the descendants in each of the possibilities. Furthermore, it is determined whether there are effective procedures for distinguishing between the various possibilities, and the exact degree of unsolvability is computed for those decision problems for which there is no effective procedure. Lastly, we discuss the relevance of the results to biology and pose several questions.
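
Abstracting away the Turing-machine internals, the trichotomy can be pictured with a toy model (a sketch assuming only that each automaton is a description d whose computer-brain determines a single offspring, with None meaning sterility):

# Sketch: the three possible fates of a line of descent. The generation
# bound is what makes classification decidable here; the paper determines
# exactly which of these distinctions are effective for true Turing machine
# automata, and the degrees of unsolvability where they are not.

def classify(progenitor, offspring, max_generations=10_000):
    seen = {}                      # description -> generation first seen
    d, g = progenitor, 0
    while g < max_generations:
        if d is None:
            return ("sterile", g)                      # the line dies out
        if d in seen:
            return ("periodic", seen[d], g - seen[d])  # (delay, period)
        seen[d] = g
        d, g = offspring(d), g + 1
    return ("possibly aperiodic",)

print(classify(0, lambda d: d + 1 if d < 3 else None))  # ('sterile', 4)
print(classify(0, lambda d: (d + 1) % 5))               # ('periodic', 0, 5)
print(classify(0, lambda d: d + 1))                     # ('possibly aperiodic',)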


SIAM Journal on Computing | 1999

The Power of Vacillation in Language Learning

John Case

Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n + 1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for avoiding overgeneralization in learning from positive data. It is applied to prove another theorem to the effect that one can optimally eliminate half of the mistakes from final programs for vacillatory criteria if one is willing to converge in the limit to infinitely many different programs instead. Child language learning may be sensitive to the order or timing of data presentation. It is shown, though, that for the vacillatory success criteria of this paper, there is no loss of learning power for machines which are insensitive to order in several ways simultaneously. For example, partly set-driven machines attend only to the set and length of sequence of positive data, not the actual sequence itself. A machine M is weakly n-ary order independent iff, by definition, for each language L on which, for some ordering of the positive data about L, M converges in the limit to a finite set of grammars, there is a finite set of grammars D (of cardinality ≤ n) such that M converges to a subset of this same D for each ordering of the positive data for L. The theorem most difficult to prove in the paper implies that machines which are simultaneously partly set-driven and weakly n-ary order independent do not lose learning power for converging in the limit to up to n grammars. Several variants of this theorem are obtained by modifying its proof, and some of these variants have application in this and other papers. Along the way it is also shown, for the vacillatory criteria, that learning power is not increased if one restricts the sequence of positive data presentation to be computable. Some of these results are nontrivial lifts of prior work for the n=1 case done by the Blums; Wiehagen; Osherson, Stob, and Weinstein; Schäfer; and Fulk.
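
As a small illustration of the order-insensitivity notions (a toy sketch, not from the paper): a partly set-driven machine's conjecture is a function only of the set of data seen and the length of the sequence, so permuting the data cannot change the conjecture.

# Sketch: a partly set-driven learner attends only to (content, length)
# of its input sequence, never to the order of arrival.

def partly_set_driven(sequence):
    content, length = frozenset(sequence), len(sequence)
    return (tuple(sorted(content)), length)   # any function of these will do

s1 = ["b", "a", "a", "c"]
s2 = ["a", "c", "b", "a"]   # same content and length, different order
assert partly_set_driven(s1) == partly_set_driven(s2)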


Information & Computation | 1999

Incremental concept learning for bounded data mining

John Case; Sanjay Jain; Steffen Lange; Thomas Zeugmann

Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered, the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data items coming in. In k-bounded example-memory inference (k is a priori fixed) the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture, the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis correctly describing the target concept. Our results are manifold. An infinite hierarchy of more and more powerful feedback learners in dependence on the number k of queries allowed to be asked is established. However, the hierarchy collapses to 1-feedback inference if only indexed families of infinite concepts are considered, and moreover, its learning power is then equal to learning in the limit. But it remains infinite for concept classes of only infinite r.e. concepts. Both k-feedback inference and k-bounded example-memory identification are more powerful than iterative learning but incomparable to one another. Furthermore, there are cases where redundancy in the hypothesis space is shown to be a resource increasing the learning power of iterative learners. Finally, the union of at most k pattern languages is shown to be iteratively inferable.
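
To make the iterative model concrete, here is a minimal hedged sketch (not the paper's construction): for the toy class L_n = {0, 1, ..., n}, the previous conjecture by itself is enough memory, so an iterative learner needs no stored examples at all.

# Sketch: an iterative learner sees only its previous conjecture and the
# latest datum. Toy class: L_n = {0, 1, ..., n}, with n as the hypothesis.

def iterative_step(prev_conjecture, datum):
    return max(prev_conjecture, datum)

presentation = [2, 0, 5, 1, 5, 3]   # start of a positive presentation of L_5
conjecture = 0
for datum in presentation:
    conjecture = iterative_step(conjecture, datum)
    print(conjecture)               # 2 2 5 5 5 5 -> converges to L_5

A k-bounded example-memory learner would additionally carry up to k past data items, and a k-feedback learner could instead compute k items and query whether each occurred earlier in the presentation.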


Journal of Experimental and Theoretical Artificial Intelligence | 1994

Infinitary self-reference in learning theory

John Case

Kleene's second recursion theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self-copy and then runs p on that self-copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self-copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the operator recursion theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other programs in the...
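
The self-copy construction is essentially the quine trick. Below is a minimal hedged Python sketch of the idea (an illustration, not Case's formal construction): e(p) yields a program that rebuilds its own source as s and then runs the body p with s, plus a stand-in external input, in scope.

# Sketch: Kleene's second recursion theorem via self-reproduction.
# The body must avoid '{' and '}' since the template uses str.format.

def e(body):
    template = ("t = {t!r}\n"
                "s = t.format(t=t, body={body!r})\n"
                "x = 42  # stand-in for the externally given input\n"
                "{body}\n")
    return template.format(t=template, body=body)

program = e("print('I am a program of', len(s), 'characters; input', x)")
exec(program)   # the running program has its own source in s, no regress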


Journal of Computer and System Sciences | 1995

Language Learning with Some Negative Information

Ganesh R. Baliga; John Case; Sanjay Jain



Information & Computation | 2008

When unlearning helps

Ganesh R. Baliga; John Case; Wolfgang Merkle; Frank Stephan; Rolf Wiehagen



Information & Computation | 1999

The synthesis of language learners

Ganesh R. Baliga; John Case; Sanjay Jain



Conference on Learning Theory | 1997

Learning Recursive Functions from Approximations

John Case; Susanne Kaufmann; Efim B. Kinber; Martin Kummer


Collaboration


Dive into John Case's collaborations.

Top Co-Authors

Sanjay Jain
National University of Singapore

Frank Stephan
National University of Singapore

Arun Sharma
University of New South Wales

Timo Kötzing
Hasso Plattner Institute

Lorenzo Carlucci
Sapienza University of Rome