Sandra Zilles
University of Regina
Publications
Featured research published by Sandra Zilles.
Theoretical Computer Science | 2008
Steffen Lange; Thomas Zeugmann; Sandra Zilles
In the past 40 years, research on inductive inference has developed along different lines, e.g., in the formalizations used and in the classes of target concepts considered. One common root of many of these formalizations is Gold's model of identification in the limit. This model has been studied for learning recursive functions, recursively enumerable languages, and recursive languages, reflecting different aspects of machine learning, artificial intelligence, complexity theory, and recursion theory. One line of research focuses on indexed families of recursive languages: classes of recursive languages described in a representation scheme for which membership of any string in any of the given languages is effectively decidable by a uniform procedure. Such language classes are of interest because of their naturalness. The survey at hand picks out important studies on learning indexed families (covering basic as well as recent research), summarizes and illustrates the corresponding results, and points out links to related fields such as grammatical inference, machine learning, and artificial intelligence in general.
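As a toy illustration of the defining property of an indexed family, the following Python sketch (our own hypothetical example, not a construction from the survey) implements a single uniform procedure that decides membership for every index and every string:

```python
# A minimal illustration of an indexed family of recursive languages:
# L_i = { w in {a,b}* : |w| is divisible by i+1 }.
# Membership is uniformly decidable: one procedure answers
# "is w in L_i?" for every index i and every string w.

def member(i: int, w: str) -> bool:
    """Uniform decision procedure for the family (L_i)_{i >= 0}."""
    return len(w) % (i + 1) == 0

# Example: "abab" is in L_1 (even length) but not in L_2
# (length 4 is not divisible by 3).
assert member(1, "abab") is True
assert member(2, "abab") is False
```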
Theoretical Computer Science | 2008
Thomas Zeugmann; Sandra Zilles
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold's (1967) model of learning in the limit, many variations, modifications, and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work in this field has been devoted to characterizing the function classes learnable in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, and the complexity of learning, among other topics. On the occasion of Rolf Wiehagen's 60th birthday, the last four decades of research in this area are surveyed, with a special focus on Rolf Wiehagen's work, which has made him one of the most influential scientists in the theory of learning recursive functions.
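For intuition, here is a minimal sketch of the model's classical starting point, Gold-style identification by enumeration, on a hypothetical toy class of recursive functions (our example, not one drawn from the survey):

```python
# Toy class of total recursive functions: f_i(x) = 1 if x == i, else 0.
# The learner outputs the least index consistent with the examples seen so
# far. On any target f_i it changes its mind finitely often, then converges:
# Gold-style learning in the limit, sketched under toy assumptions.

def f(i: int, x: int) -> int:
    return 1 if x == i else 0

def hypothesis(examples):
    """Least index consistent with all examples (identification by enumeration)."""
    i = 0
    while not all(f(i, x) == y for x, y in examples):
        i += 1
    return i

target = 3
data = []
for x in range(6):
    data.append((x, f(target, x)))
    print(f"after x={x}: hypothesis = {hypothesis(data)}")
# Prints hypotheses 1, 2, 3, 3, 3, 3: finitely many mind changes, then convergence.
```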
Artificial Intelligence | 2013
Levi H. S. Lelis; Sandra Zilles; Robert C. Holte
Korf, Reid, and Edelkamp initiated a line of research for developing methods (KRE and later CDP) that predict the number of nodes expanded by IDA* for a given start state and cost bound. Independently, Chen developed a method (SS) that can also be used to predict the number of nodes expanded by IDA*. In this paper we improve both of these prediction methods. First, we present ε-truncation, a method that acts as a preprocessing step and improves CDP's prediction accuracy. Second, and orthogonally to ε-truncation, we present a variant of CDP that can be orders of magnitude faster than CDP while producing exactly the same predictions. Third, we show how ideas developed in the KRE line of research can be used to improve the predictions produced by SS. Finally, we make an empirical comparison between our new enhanced versions of CDP and SS. Our experimental results suggest that CDP is suitable for applications that require less accurate but fast predictions, while SS is suitable for applications that require more accurate predictions but can afford more computation time.
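SS descends from Knuth's classical random-probing estimator of search-tree size. The sketch below shows only that ancestor idea (the tree and probe count are hypothetical, and full SS additionally stratifies probes by a "type system" to reduce variance):

```python
import random

# Knuth's estimator: walk one random root-to-leaf path, multiplying the
# branching factors encountered; the accumulated sum is an unbiased
# estimate of the number of nodes in the tree.

def knuth_probe(children, root):
    """One random probe; returns an unbiased estimate of the tree size."""
    estimate, weight, node = 1, 1, root
    while True:
        succ = children(node)
        if not succ:
            return estimate
        weight *= len(succ)      # expected number of nodes at this depth
        estimate += weight
        node = random.choice(succ)

# Toy tree: complete binary tree of depth 10, nodes encoded as bit-strings.
def children(node, depth=10):
    return [] if len(node) == depth else [node + "0", node + "1"]

probes = [knuth_probe(children, "") for _ in range(1000)]
print(sum(probes) / len(probes))  # ~= 2**11 - 1 = 2047 nodes
```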
Artificial Intelligence | 2010
Sandra Zilles; Robert C. Holte
Abstraction is a powerful technique for speeding up planning and search. A problem that can arise in using abstraction is the generation of abstract states, called spurious states, from which the goal state is reachable in the abstract space but for which there is no corresponding state in the original space from which the goal state can be reached. Spurious states can be harmful in practice because they can create artificial shortcuts in the abstract space that slow down planning and search, and they can greatly increase the memory needed to store heuristic information derived from the abstract space (e.g., pattern databases). This paper analyzes the computational complexity of creating abstractions that do not contain spurious states. We define a property, the downward path preserving property (DPP), that formally captures the notion that an abstraction does not result in spurious states. We then analyze the computational complexity of (i) testing the downward path preserving property for a given state space and abstraction and (ii) determining whether this property is achievable at all for a given state space. The strong hardness results shown carry over to typical description languages for planning problems, including SAS+ and propositional STRIPS. On the positive side, we identify and illustrate formal conditions under which finding downward path preserving abstractions is provably tractable.
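For intuition, here is a brute-force sketch of the spurious-state definition on an explicitly listed state space (the example graph and abstraction are invented; the paper's hardness results concern compactly represented spaces, where this naive check is infeasible):

```python
from collections import defaultdict

# An abstract state t is spurious if the abstract goal is reachable from t,
# yet no concrete state mapping to t can reach the concrete goal.

def backward_reachable(edges, goal):
    """All states from which `goal` is reachable."""
    rev = defaultdict(set)
    for u, v in edges:
        rev[v].add(u)
    seen, stack = {goal}, [goal]
    while stack:
        for u in rev[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def spurious_states(states, edges, goal, phi):
    abs_edges = {(phi[u], phi[v]) for u, v in edges}          # induced abstract space
    can_reach = backward_reachable(edges, goal)               # concrete reachability
    abs_can_reach = backward_reachable(abs_edges, phi[goal])  # abstract reachability
    return {t for t in abs_can_reach
            if not any(phi[s] == t and s in can_reach for s in states)}

# Toy example: 2 can only reach the dead end 4, but merging 1 and 4 into "B"
# creates an artificial abstract shortcut C -> B -> G, making "C" spurious.
states = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 3), (2, 4)]
phi = {0: "A", 1: "B", 2: "C", 3: "G", 4: "B"}
print(spurious_states(states, edges, 3, phi))  # {'C'}
```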
Conference on Learning Theory | 2004
Steffen Lange; Sandra Zilles
Different formal learning models address different aspects of human learning. Below we compare Gold-style learning, which interprets learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis, to learning via queries, which interprets learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis. Although these two approaches seem rather unrelated at first glance, we provide characterizations of different models of Gold-style learning (learning in the limit, conservative inference, and behaviourally correct learning) in terms of query learning. Thus we describe the circumstances under which limit learners can be replaced by equally powerful one-shot learners. Our results are valid in the general context of learning indexable classes of recursive languages.
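As an informal illustration of conservative inference, one of the Gold-style models characterized here, consider the following toy sketch (the indexed family and learner are hypothetical examples of ours, not constructions from the paper):

```python
# Conservative inference from positive data for the toy indexed family
# L_i = { "a"*j : 0 <= j <= i }. The learner abandons its current hypothesis
# only when a positive example contradicts it, i.e. falls outside the
# hypothesized language -- the defining trait of conservativeness.

def member(i: int, w: str) -> bool:
    return set(w) <= {"a"} and len(w) <= i

def conservative_learner(text):
    h = 0                          # initial hypothesis: index 0
    for w in text:
        if not member(h, w):       # current hypothesis contradicted...
            h = len(w)             # ...switch to the least consistent index
        yield h

for h in conservative_learner(["a", "aaa", "aa", "aaa"]):
    print(h)  # 1, 3, 3, 3 -- stabilizes once the longest word has appeared
```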
Algorithmic Learning Theory | 2004
Steffen Lange; Sandra Zilles
Different formal learning models address different aspects of learning. Below we compare learning via queries—interpreting learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis—to Gold-style learning—interpreting learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis.
Algorithmic Learning Theory | 2006
Sanjay Jain; Steffen Lange; Sandra Zilles
The present study aims at insights into the nature of incremental learning in the context of Gold's model of identification in the limit. With a focus on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples and for learning from positive and negative examples. The results obtained illustrate how different consistency and conservativeness demands can affect the capabilities of incremental learners. These results may serve as a first step towards characterising the structure of typical classes learnable incrementally and thus towards elaborating uniform incremental learning methods.
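For intuition, here is a minimal sketch of an iterative (incremental) learner, whose next hypothesis depends only on its previous hypothesis and the current example (the toy class of finite languages is our own illustration, not one from the paper):

```python
# Incremental learning of finite languages from positive data, with the
# finite set itself as hypothesis. The update never consults the history of
# examples, only the previous hypothesis and the current word. It is also
# conservative: the hypothesis changes only when contradicted.

def update(h: frozenset, w: str) -> frozenset:
    """One incremental step: previous hypothesis + current example only."""
    return h if w in h else h | {w}

h = frozenset()
for w in ["ab", "a", "ab", "ba", "a"]:
    h = update(h, w)
print(sorted(h))  # ['a', 'ab', 'ba'] -- converged once every word has appeared
```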
Information Processing Letters | 2004
Steffen Lange; Sandra Zilles
A natural approach towards powerful machine learning systems is to enable options for additional machine/user interaction, for instance by allowing the system to ask queries about the concept to be learned. This motivates the development and analysis of adequate formal learning models. In the present paper, we investigate two different types of query learning models in the context of learning indexable classes of recursive languages: Angluin's original model and a relaxation thereof, called learning with extra queries. In the original model the learner is restricted to querying languages belonging to the target class, while in the new model it is allowed to query other languages, too. As usual, the following standard types of queries are considered: superset, subset, equivalence, and membership queries. The learning capabilities of the resulting query learning models are compared to one another and to different versions of Gold-style language learning from only positive data and from positive and negative data (including finite learning, conservative inference, and learning in the limit). A complete picture of the relations between all these models is elaborated, revealing a number of interesting differences and similarities between query learning and Gold-style learning. In particular, query learning with extra superset queries coincides with conservative inference from only positive data. This result documents the naturalness of the new query model.
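The one-shot character of query learning can be illustrated with a toy sketch using membership queries alone on a finite domain (our simplification; the paper treats superset, subset, equivalence, and membership queries for indexable classes of recursive languages):

```python
# One-shot query learning of subsets of {0,...,9} with membership queries.
# The teacher answers "is x in the target?"; after finitely many queries the
# learner commits to a single, exact hypothesis -- in contrast to a limit
# learner, it never revises its output.

def teacher(target):
    def membership_query(x):
        return x in target
    return membership_query

def one_shot_learner(membership_query, domain=range(10)):
    return {x for x in domain if membership_query(x)}

target = {2, 3, 5, 7}
print(one_shot_learner(teacher(target)))  # {2, 3, 5, 7}
```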
Information & Computation | 2005
Steffen Lange; Sandra Zilles
Different formal learning models address different aspects of human learning. Below we compare Gold-style learning, which models learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis, to learning via queries, which models learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis. In the Gold-style model considered below, the information presented to the learner consists of positive examples for the target concept, whereas in query learning, the learner may pose a certain kind of query about the target concept, which will be answered correctly by an oracle (called the teacher). Although these two approaches seem rather unrelated at first glance, we provide characterisations of different models of Gold-style learning (learning in the limit, conservative inference, and behaviourally correct learning) in terms of query learning. Thus we describe the circumstances under which limit learners can be replaced by equally powerful one-shot learners. Our results are valid in the general context of learning indexable classes of recursive languages. This analysis leads to an important observation, namely that there is a natural query learning type lying strictly between Gold-style learning in the limit and behaviourally correct learning. Astonishingly, this query learning type can then again be characterised in terms of Gold-style inference.
Algorithmic Learning Theory | 2011
Michael Geilke; Sandra Zilles
Patterns provide a simple yet powerful means of describing formal languages. However, for many applications, neither patterns nor their generalization, typed patterns, are expressive enough. This paper extends the model of (typed) patterns by allowing relations between the variables in a pattern. The resulting formal languages are called Relational Pattern Languages (RPLs). We study the problem of learning RPLs from positive data (text) as well as the membership problem for RPLs. These problems are not solvable, or not efficiently solvable, in general, but we prove positive results for interesting subproblems. We further introduce a new model of learning from a restricted pool of potential texts. Probabilistic assumptions on the process that generates words from patterns make the appearance of some words in the text more likely than that of others. We prove that, in our new model, a large subclass of RPLs can be learned with high confidence, by effectively restricting the set of likely candidate patterns to a finite set after processing a single positive example.
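For intuition about membership in ordinary (non-relational) pattern languages, here is a toy check via regex backreferences (the pattern syntax is our own convention; RPLs additionally impose relations between variables, which this sketch does not model):

```python
import re

# Toy pattern-language membership: a word belongs to the language of a
# pattern if every variable can be replaced by the same non-empty string at
# all of its occurrences. Convention here: lowercase letters are constants,
# uppercase letters are variables.

def pattern_to_regex(pattern: str) -> str:
    groups, out = {}, []
    for ch in pattern:
        if ch.isupper():                      # variable
            if ch in groups:
                out.append(f"\\{groups[ch]}")  # reuse the earlier binding
            else:
                groups[ch] = len(groups) + 1
                out.append("(.+)")             # bind to a non-empty string
        else:                                 # constant
            out.append(re.escape(ch))
    return "".join(out)

def member(pattern: str, word: str) -> bool:
    return re.fullmatch(pattern_to_regex(pattern), word) is not None

print(member("XaX", "bbabb"))  # True:  X -> "bb"
print(member("XaX", "babba"))  # False: no single binding for X works
```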