Hiroki Ishizaka
Kyushu Institute of Technology
Publication
Featured researches published by Hiroki Ishizaka.
algorithmic learning theory | 1995
Hiroki Arimura; Hiroki Ishizaka; Takeshi Shinohara
This paper characterizes the polynomial-time learnability of TP_k, the class of collections of at most k first-order terms. A collection in TP_k defines the union of the languages defined by each first-order term in the set. Unfortunately, the class TP_k is not polynomial-time learnable in most learning frameworks under standard assumptions in computational complexity theory. To overcome this computational hardness, we relax the learning problem by allowing a learning algorithm to make membership queries. We present a polynomial-time algorithm that exactly learns every concept in TP_k using O(kn) equivalence queries and O(k^2 n · max{k, n}) membership queries, where n is the size of the longest counterexample given so far. In the proof, we use a technique of replacing each restricted subset query by several membership queries under some condition on the set of function symbols. As corollaries, we obtain the polynomial-time PAC-learnability and the polynomial-time predictability of TP_k when membership queries are available. We also show a lower bound Ω(kn) on the number of queries necessary to learn TP_k using both types of queries. Further, we show that neither type of query can be eliminated while keeping learning of TP_k efficient. Finally, we apply our results to learning a class of restricted logic programs called unit clause programs.
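The basic operation behind learning unions of first-order terms is the subsumption test: does a term with variables match a given ground term under some substitution? The sketch below is not the paper's algorithm, only a minimal illustration of that test; the tuple-based term representation (`('f', arg1, ...)` for compound terms, strings beginning with `?` for variables) is an assumption made for this example.

```python
# Hypothetical term representation: compound terms are tuples ('f', arg1, ...),
# variables are strings starting with '?', constants are plain strings.

def match(pattern, ground, subst=None):
    """Return a substitution mapping variables in `pattern` to subterms of
    `ground` so that pattern[subst] == ground, or None if none exists."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                       # variable already bound
            return subst if subst[pattern] == ground else None
        subst = dict(subst)
        subst[pattern] = ground                    # bind the variable
        return subst
    if isinstance(pattern, tuple) and isinstance(ground, tuple):
        if pattern[0] != ground[0] or len(pattern) != len(ground):
            return None                            # different function symbols
        for p_arg, g_arg in zip(pattern[1:], ground[1:]):
            subst = match(p_arg, g_arg, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == ground else None

def covered(collection, ground):
    """A collection {t1, ..., tk} covers a ground term iff some ti matches it."""
    return any(match(t, ground) is not None for t in collection)
```

Note that repeated variables must bind consistently, so `('f', '?x', '?x')` matches `('f', 'a', 'a')` but not `('f', 'a', 'b')`.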
conference on learning theory | 1989
Hiroki Ishizaka
This paper is concerned with the problem of learning simple deterministic languages. The algorithm described here is essentially based on Shapiro's theory of model inference. In our setting, however, nonterminal membership queries are not used for any nonterminal other than the start symbol; extended equivalence queries are used instead. Nonterminals necessary for a correct grammar, together with their meanings, are introduced automatically.
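The interaction pattern the abstract describes can be sketched as a generic equivalence-query learning loop: ask whether the current hypothesis is correct, and on a counterexample refine the hypothesis and ask again. This skeleton is only an illustration of the query protocol; the refinement operator and the grammar representation in the paper are far more involved than the placeholders here.

```python
def learn(equivalence_query, initial_hypothesis, refine):
    """Generic exact-learning loop with equivalence queries.

    `equivalence_query(h)` returns None if h is correct, otherwise a
    counterexample; `refine(h, ce)` produces the next hypothesis.
    """
    h = initial_hypothesis
    while True:
        counterexample = equivalence_query(h)
        if counterexample is None:   # hypothesis accepted by the oracle
            return h
        h = refine(h, counterexample)

# Toy usage with a hypothetical target concept {0, 1, 2, 3, 4}, where a
# hypothesis n denotes the set {0, ..., n-1}:
target = set(range(5))

def eq(n):
    diff = target ^ set(range(n))
    return min(diff) if diff else None     # smallest disagreement, if any

result = learn(eq, 0, lambda h, ce: ce + 1)
```

The loop terminates as soon as the oracle can no longer distinguish the hypothesis from the target, which is the defining property of learning via equivalence queries.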
discovery science | 1999
Takeshi Shinohara; Jianping Chen; Hiroki Ishizaka
Approximate retrieval of multi-dimensional data, such as documents, digital images, and audio clips, is a method to get objects within some dissimilarity from a given object. We assume a metric space containing the objects, where distance measures dissimilarity. In Euclidean metric spaces, approximate retrieval is easily and efficiently realized by the spatial indexing/access method R-tree. First, we consider objects in a discrete L1 (Manhattan distance) metric space and present a method for embedding them into Euclidean space. Then, we propose a projection mapping, H-Map, to reduce the dimensionality of multi-dimensional data, which can be applied to any metric space, such as an L1 or L∞ metric space, as well as Euclidean space. Unlike the K-L transformation, H-Map does not require coordinates of the data. H-Map has an advantage in using spatial indexing such as R-trees because it is a continuous mapping from a metric space into an L∞ metric space, where a hyper-sphere is a hyper-cube in the usual sense. Finally, we show that the distance function itself, which is simpler than H-Map, can be used as a dimension-reduction mapping for any metric space.
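The final claim, that a distance function itself can act as a dimension-reduction mapping, can be illustrated by projecting each object to its distances from a few fixed pivot objects. By the triangle inequality this projection is contractive under the L∞ distance, so a range query in the projected space never discards a true answer; false positives are filtered afterwards by the exact distance. This is only a hedged sketch of the general idea, not the paper's H-Map construction, and the pivot choice here is arbitrary.

```python
def project(x, pivots, dist):
    """Map object x to the vector of its distances to the pivots."""
    return [dist(x, p) for p in pivots]

def linf(u, v):
    """L-infinity distance between two projected vectors."""
    return max(abs(a - b) for a, b in zip(u, v))

def range_query(data, query, radius, pivots, dist):
    """Find all objects within `radius` of `query` under metric `dist`."""
    q_img = project(query, pivots, dist)
    # Contractive filter: |dist(x,p) - dist(q,p)| <= dist(x,q) for every
    # pivot p, so this step produces no false drops.
    candidates = [x for x in data
                  if linf(project(x, pivots, dist), q_img) <= radius]
    # Verify candidates with the exact distance.
    return [x for x in candidates if dist(x, query) <= radius]

# Example with the L1 (Manhattan) metric on integer points:
def l1(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))
```

In the projected L∞ space a query ball is an axis-aligned hyper-cube, which is exactly the shape an R-tree handles natively.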
algorithmic learning theory | 1996
Noriko Sugimoto; Kouichi Hirata; Hiroki Ishizaka
Learning a translation based on a dictionary is to extract a binary relation over strings from given examples, based on information supplied by the dictionary. In this paper, we introduce a restricted elementary formal system called a regular TEFS to formalize translations and dictionaries. We then propose a learning algorithm that identifies a translation defined by a regular TEFS from positive and negative examples. The main advantage of the learning algorithm is that it is constructive; that is, the produced hypothesis reflects the examples directly. The algorithm generates the most specific clauses from the examples by referring to a dictionary, generalizes these clauses, and then removes overly strong clauses. As a result, the algorithm can learn translations over context-free languages.
Annals of Mathematics and Artificial Intelligence | 1998
Hiroki Ishizaka; Hiroki Arimura; Takeshi Shinohara
This paper is concerned with the problem of finding a hypothesis in $\mathcal{TP}^2$ consistent with given positive and negative examples.
conference on learning theory | 1992
Hiroki Arimura; Hiroki Ishizaka; Takeshi Shinohara
New Generation Computing | 2000
Takeshi Shinohara; Jiyuan An; Hiroki Ishizaka
algorithmic learning theory | 1992
Hiroki Ishizaka; Hiroki Arimura; Takeshi Shinohara
discovery science | 2001
Noriko Sugimoto; Hiroki Ishizaka; Takeshi Shinohara
discovery science | 1998
Takeshi Shinohara; Jiyuan An; Hiroki Ishizaka