Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ziyuan Gao is active.

Publication


Featured research published by Ziyuan Gao.


Conference on Computability in Europe | 2013

On Conservative Learning of Recursively Enumerable Languages

Ziyuan Gao; Sanjay Jain; Frank Stephan

Conservative partial learning is a variant of partial learning whereby the learner, on a text for a target language \(L\), outputs one index \(e\) with \(L = W_e\) infinitely often, and every further hypothesis \(d\) is output only finitely often and satisfies \(L \not\subseteq W_d\). The present paper studies the learning strength of this notion, comparing it with other learnability criteria such as confident partial learning, explanatory learning, as well as behaviourally correct learning. It is further established that for classes comprising infinite sets, conservative partial learnability is in fact equivalent to explanatory learnability relative to the halting problem.


WTCS'12: Proceedings of the 2012 International Conference on Theoretical Computer Science: Computation, Physics and Beyond | 2012

Learning families of closed sets in matroids

Ziyuan Gao; Frank Stephan; Guohua Wu; Akihiro Yamamoto

In this paper it is studied for which oracles A and which types of A-r.e. matroids the class of all A-r.e. closed sets in the matroid is learnable by an unrelativised learner. The learning criteria considered comprise in particular criteria more general than behaviourally correct learning, namely behaviourally correct learning from recursive texts, partial learning and reliably partial learning. For various natural classes of matroids and learning criteria, characterisations of learnability are obtained.


Information & Computation | 2017

Distinguishing pattern languages with membership examples

Ziyuan Gao; Zeinab Mazadi; Regan Meloche; Hans Ulrich Simon; Sandra Zilles

This article determines two learning-theoretic combinatorial parameters, the teaching dimension and the recursive teaching dimension, for various families of pattern languages over alphabets of varying size. Our results and formal proofs are of relevance to recent studies in computational learning theory as well as in formal language theory. This is an expanded and corrected version of an earlier paper by Mazadi et al. (2014).
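For readers unfamiliar with these parameters, here is a toy sketch (not from the article; the concept class and all names are illustrative) of the underlying definition: a teaching set for a concept is a sample of labelled points with which no other concept in the class is consistent, and the teaching dimension of a finite class can be found by brute-force search over samples.

```python
from itertools import combinations

def teaching_dim(concepts, domain):
    """Teaching dimension of a finite concept class.

    Each concept is a frozenset over `domain`. The teaching dimension of a
    concept C is the size of a smallest sample S such that C is the only
    concept in the class agreeing with C on every point of S; the class's
    teaching dimension is the maximum of this over all concepts.
    """
    def td(c):
        others = [d for d in concepts if d != c]
        for k in range(len(domain) + 1):
            for sample in combinations(domain, k):
                # `sample` teaches c if every other concept disagrees
                # with c on at least one sampled point
                if all(any((x in c) != (x in d) for x in sample)
                       for d in others):
                    return k
        return len(domain)
    return max(td(c) for c in concepts)

# Toy class over {0, 1, 2}: the empty set plus all singletons.
domain = [0, 1, 2]
concepts = [frozenset(), frozenset({0}), frozenset({1}), frozenset({2})]
print(teaching_dim(concepts, domain))  # → 3: each singleton is taught by one
                                       # positive example, but the empty set
                                       # needs all three negative examples
```

The pattern-language families studied in the article are infinite, so its results rest on combinatorial arguments rather than search; the brute-force version above only conveys the definition.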


Language and Automata Theory and Applications | 2014

Distinguishing Pattern Languages with Membership Examples

Zeinab Mazadi; Ziyuan Gao; Sandra Zilles

This article determines two learning-theoretic combinatorial parameters, the teaching dimension and the recursive teaching dimension, for various families of pattern languages over alphabets of varying size. Our results and formal proofs are of relevance to recent studies in computational learning theory as well as in formal language theory.


Theoretical Computer Science | 2017

On the teaching complexity of linear sets

Ziyuan Gao; Hans Ulrich Simon; Sandra Zilles

Linear sets are the building blocks of semilinear sets, which are in turn closely connected to automata theory and formal languages. Prior work has investigated the learnability of linear sets and semilinear sets in three models – Valiant's PAC-learning model, Gold's learning in the limit model, and Angluin's query learning model. This paper considers teacher–learner models of learning families of linear sets, in which a benevolent teacher presents a set of labelled examples to the learner. First, we study the classical teaching model, in which a teacher must successfully teach any consistent learner. Second, we apply a generalisation of the recently introduced recursive teaching model to several infinite classes of linear sets, and show that the maximum sample complexity of teaching these classes can thus be drastically reduced compared to classical teaching. To this end, a major focus of the paper is on determining two relevant teaching parameters, the teaching dimension and recursive teaching dimension, for various families of linear sets.
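As background (an illustrative sketch of the standard definition, not code from the paper): a linear set over the naturals is determined by a constant vector \(c\) and period vectors \(p_1, \ldots, p_n\), and consists of all sums \(c + k_1 p_1 + \cdots + k_n p_n\) with each \(k_i \ge 0\). A minimal Python helper that enumerates such a set up to a coordinate bound, with hypothetical example vectors:

```python
from itertools import product

def linear_set(constant, periods, bound):
    """Elements of the linear set {c + k1*p1 + ... + kn*pn : ki >= 0}
    whose coordinates all lie below `bound`.

    Vectors are tuples over the naturals; periods are assumed nonzero.
    """
    dim = len(constant)
    elems = set()
    # Each ki can be capped at `bound`: larger multipliers of a nonzero
    # period push some coordinate past the bound anyway.
    for ks in product(range(bound + 1), repeat=len(periods)):
        v = tuple(constant[i] + sum(k * p[i] for k, p in zip(ks, periods))
                  for i in range(dim))
        if all(x < bound for x in v):
            elems.add(v)
    return elems

# Linear set in N^2 with constant (1, 0) and periods (2, 0) and (0, 3):
# points (1 + 2*k1, 3*k2), here enumerated with all coordinates below 7.
print(sorted(linear_set((1, 0), [(2, 0), (0, 3)], 7)))
```

Labelled membership examples drawn from such an enumeration are the kind of data a teacher in the models above presents to a learner.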


Algorithmic Learning Theory | 2013

Partial Learning of Recursively Enumerable Languages

Ziyuan Gao; Frank Stephan; Sandra Zilles

This paper studies several typical learning criteria in the model of partial learning of r.e. sets in the recursion-theoretic framework of inductive inference. Its main contribution is a complete picture of how the criteria of confidence, consistency and conservativeness in partial learning of r.e. sets separate, also in relation to basic criteria of learning in the limit. Thus this paper constitutes a substantial extension to prior work on partial learning. Further highlights of this work are very fruitful characterisations of some of the inference criteria studied, leading to interesting consequences about the structural properties of the collection of classes learnable under these criteria. In particular a class is consistently partially learnable iff it is a subclass of a uniformly recursive family.


Algorithmic Learning Theory | 2015

On the Teaching Complexity of Linear Sets

Ziyuan Gao; Hans Ulrich Simon; Sandra Zilles

Linear sets are the building blocks of semilinear sets, which are in turn closely connected to automata theory and formal languages. Prior work has investigated the learnability of linear sets and semilinear sets in three models – Valiant's PAC-learning model, Gold's learning in the limit model, and Angluin's query learning model. This paper considers a teacher-learner model of learning families of linear sets, whereby the learner is assumed to know all the smallest sets …


Theoretical Computer Science | 2014

Confident and consistent partial learning of recursive functions

Ziyuan Gao; Frank Stephan


Algorithmic Learning Theory | 2016

Classifying the Arithmetical Complexity of Teaching Models

Achilles A. Beros; Ziyuan Gao; Sandra Zilles



Theoretical Computer Science | 2016

Partial learning of recursively enumerable languages

Ziyuan Gao; Frank Stephan; Sandra Zilles

Collaboration


Dive into Ziyuan Gao's collaborations.

Top Co-Authors

Frank Stephan

National University of Singapore

Sanjay Jain

National University of Singapore

David G. Kirkpatrick

University of British Columbia

Guohua Wu

Nanyang Technological University