Robert P. Daley
University of Pittsburgh
Publications
Featured research published by Robert P. Daley.
Information & Computation | 1986
Robert P. Daley; Carl H. Smith
The notion of the complexity of performing an inductive inference is defined. Some examples of the tradeoffs between the complexity of performing an inference and the accuracy of the inferred result are presented. An axiomatization of the notion of the complexity of inductive inference is developed and several results are presented which both resemble and contrast with results obtainable for the axiomatic computational complexity of recursive functions.
conference on learning theory | 1992
Robert P. Daley; Bala Kalyanasundaram; Mahendran Velauthapillai
We show that for every probabilistic FIN-type learner with success ratio greater than 24/49, there is another probabilistic FIN-type learner with success ratio 1/2 that simulates the former. We will also show that this simulation result is tight. We obtain as a consequence of this work a characterization of FIN-type team learning with success ratio between 24/49 and 1/2. We conjecture that the learning capabilities of probabilistic FIN-type learners for probabilities beginning at probability 1/2 are delimited by the sequence 8n/(17n−2) for n > 2, which has an accumulation point at 8/17.
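As a quick sanity check on the ratios quoted above, the short Python sketch below (ours, not from the paper; the name fin_breakpoint is a hypothetical label) tabulates the conjectured sequence 8n/(17n−2) and its limit 8/17; note that n = 2 gives 1/2 and n = 3 gives 24/49, the two ratios named in the abstract.

from fractions import Fraction

# Conjectured delimiting sequence from the abstract: 8n/(17n - 2), accumulating at 8/17.
def fin_breakpoint(n):
    return Fraction(8 * n, 17 * n - 2)

for n in range(2, 7):
    print(n, fin_breakpoint(n))                      # n = 2 -> 1/2, n = 3 -> 24/49, ...

print(Fraction(8, 17), float(Fraction(8, 17)))       # accumulation point, roughly 0.4706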
Theoretical Computer Science | 1983
Robert P. Daley
In this paper we show that it is always possible to reduce errors for some forms of inductive inference by increasing the number of machines involved in the inference process. Moreover, we obtain precise bounds for the number of machines required to reduce any given number of errors. The type of inference we consider here was originally defined by Barzdin [1] (called GN^∞ inference), and later independently by Case [2] (called BC inference), who expanded the definition to include the inference of programs which are allowed a finite number of errors (called BC^m inference). This latter type of inference has been studied extensively by Case and Smith [2,4]. We use here the definitions and notations from Case and Smith. We say that an inductive inference machine M BC^m identifies a total function f (written f ∈ BC^m(M)) if and only if, when M is successively fed the graph of f as input, it outputs over time a sequence of programs p_0, p_1, ..., such that for all but finitely many i, φ_{p_i} =^m f, where g =^m h means that g and h disagree at at most m places. Thus, when M is presented a function f it is permitted to change its mind infinitely often so long as eventually each of its conjectures contains at most m errors. Moreover, if for some integer k ≤ m at most finitely many of the programs p_i in the above sequence have more than k 'errors of commission' (i.e., an error at a location x such that φ_{p_i}(x) is defined and unequal to f(x)), then we write f ∈ BC^{m,k}(M). We will assume without loss of generality that the machine M is always presented with finite initial segments of the graph of f. We define the following classes,
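To make the "=^m" relation in the definition above concrete, here is a minimal Python sketch (ours, not from the paper); it only checks agreement on a finite set of sample points, whereas the actual relation quantifies over all inputs.

# Sketch: g =^m f on a finite sample means g and f disagree on at most m of the points.
def agrees_within(g, f, points, m):
    disagreements = sum(1 for x in points if g(x) != f(x))
    return disagreements <= m

f = lambda x: x * x                         # the target function
g = lambda x: -1 if x == 3 else x * x       # a conjecture with a single error, at x = 3
print(agrees_within(g, f, range(10), 1))    # True: at most 1 disagreement on the sample
print(agrees_within(g, f, range(10), 0))    # False: g is not exactly correct on the sample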
conference on learning theory | 1991
Robert P. Daley; Leonard Pitt; Mahendran Velauthapillai; Todd Will
A typical way to increase the power of a learning paradigm is to allow randomization and require successful learning only with some probability p. Another standard approach is to allow a team of s learners working in parallel and to demand only that at least r of them correctly learn. These two variants are compared for the model of learning of total recursive functions where the learning algorithm is allowed an unbounded but finite amount of computation, and must halt with a correct program after receiving only a finite number of values of the function to be learned.
Journal of Experimental and Theoretical Artificial Intelligence | 1994
Robert P. Daley; Bala Kalyanasundaram; Mahe Velauthapillai
We consider the capabilities of probabilistic FIN-type learners who must always produce programs (i.e., hypotheses) that halt on every input. We show that the structure of the learning capability of probabilistic and team learning with success ratio above 1/2 in PFIN-type learning is analogous to the structure observed in FIN-type learning. In contrast, the structure of probabilistic and team learning with success ratio at or below 1/2 is more sparse for PFIN-type learning than for FIN-type learning. For n ≥ 2, we show that the probabilistic hierarchy below 1/2 for PFIN-type learning is defined by the sequence 4n/(9n−2), which has an accumulation point at 4/9. We also show that the power of redundancy at the accumulation point 4/9 is different from the one observed at 1/2. More interestingly, for the first time, we show the power of redundancy even at points that are not accumulation points.
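The sketch below (ours; pfin_breakpoint is a hypothetical name) simply tabulates the sequence 4n/(9n−2) quoted in the abstract, showing that it starts at 1/2 for n = 2 and decreases toward the accumulation point 4/9.

from fractions import Fraction

# Sequence from the abstract: 4n/(9n - 2) for n >= 2, accumulating at 4/9.
def pfin_breakpoint(n):
    return Fraction(4 * n, 9 * n - 2)

for n in range(2, 8):
    print(n, pfin_breakpoint(n))                 # 1/2, 12/25, 8/17, 20/43, ...

print(Fraction(4, 9), float(Fraction(4, 9)))     # accumulation point, roughly 0.4444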
conference on learning theory | 1993
Robert P. Daley; Bala Kalyanasundaram
We consider the power of randomization in finite learning when a bounded number m of mind changes is allowed. We show that in the range ((2^(m+2)−3)/(2^(m+2)−2), 1] the capability type of probabilistic as well as pluralistic FIN-type (also PFIN-type) learners is defined by the sequence
algorithmic learning theory | 1993
Robert P. Daley; Bala Kalyanasundaram
The main contribution of this paper is the development of analytical tools which permit the determination of team learning capabilities as well as the corresponding types of redundancy for Popperian FINite learning. The basis of our analytical framework is a reduction technique used previously by us.
Theoretical Computer Science | 1977
Robert P. Daley
We are concerned in this paper with an inference problem which is not quite the usual inductive inference or grammatical inference problem, but which does correspond to the scientific investigation of phenomena. In order to see this relationship we consider the following scenario. A scientist wishing to investigate a certain phenomenon performs an experiment and obtains some data. Now, in a sense this data is itself a description of the phenomenon, but the scientist is more interested in discovering some scientific law or principle which explains the phenomenon (or at least agrees with the experimental data). Let us presume that she is successful in formulating such a law. In time the law will be verified and become accepted by the scientific community, and perhaps this law will be incorporated into a broader, simpler principle. Of interest to us in the foregoing scenario is the apparent concern on the part of the scientific community with finding ever more succinct descriptions (i.e., laws) for phenomena. Now, by no means is the shortest such description the most convenient to use. Indeed, the application of very general principles to a concrete problem may require rather large computational effort.
mathematical foundations of computer science | 1992
Robert P. Daley; Bala Kalyanasundaram
We consider the learning capabilities of FIN-type learners when they are allowed to change their hypothesis at most once. We consider probabilistic learners of this sort as well as pluralistic learners (i.e., teams of deterministic learners). We show for probabilities in the interval (5/6, 1] that the capabilities of the probabilistic learners are precisely divided into intervals of the form ((5n+6)/(6n+7), (5n+1)/(6n+1)] for all n ≥ 0. We show that teams of such learners with plurality r/s (i.e., a team of s learners such that at least r always successfully learn) are equivalent to probabilistic learners with probability of success r/s. We also show that at the ratio 5/6 redundancy pays for team learners (i.e., a team with plurality 10/12 is more powerful than a team with plurality 5/6). Moreover, for any r and s with r/s = 5/6 we show that any team of learners with plurality r/s is equivalent to a team with plurality 5/6 if r is odd, and to a team with plurality 10/12 if r is even.
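As an illustration (ours, not from the paper; the helper name interval is hypothetical), the following Python lines list the first few of these intervals and confirm that consecutive intervals abut one another and approach 5/6.

from fractions import Fraction

# Intervals from the abstract: ((5n + 6)/(6n + 7), (5n + 1)/(6n + 1)] for n >= 0.
def interval(n):
    return Fraction(5 * n + 6, 6 * n + 7), Fraction(5 * n + 1, 6 * n + 1)

for n in range(4):
    low, high = interval(n)
    print(n, low, high)      # n = 0 -> (6/7, 1], n = 1 -> (11/13, 6/7], ...

print(Fraction(5, 6))        # the intervals accumulate at 5/6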
conference on learning theory | 1993
Robert P. Daley; Bala Kalyanasundaram; Mahendran Velauthapillai
The type of learning which we consider is finite learning (FIN-type), where a learner is permitted to conjecture only one program for the function which it is trying to learn. In this paper we investigate the relative learning capabilities of probabilistic and pluralistic learners when they are allowed to conjecture programs which have errors in them. Pluralistic learners are teams of learners which cooperate in trying to learn a function. We determine the exact point at which probabilistic learners are more powerful than deterministic (i.e., a team of size one) learners. The “bootstrapping technique” of Freivalds has been widely used in finite learning for determining the capabilities of probabilistic and team learners. However, when the learners are allowed to produce programs that may commit errors, then “bootstrapping” cannot be employed. For probability p > 2/3, we show that a probabilistic learner with success probability p can be replaced with a deterministic learner. We also show that the cut-off point 2/3 is indeed tight. Quite surprisingly, in the case of PFIN-learning, the cut-off point is