Kohei Hatano
Kyushu University
Publications
Featured research published by Kohei Hatano.
Discovery Science | 2007
Kazuyuki Narisawa; Hideo Bannai; Kohei Hatano; Masayuki Takeda
We propose an unsupervised method for detecting spam documents from a given set of documents, based on equivalence relations on strings. We give three measures for quantifying the alienness (i.e., how different a substring is from the others) of substrings within the documents. A document is then classified as spam if it contains a substring that belongs to an equivalence class with a high degree of alienness. The proposed method is unsupervised, language independent, and scalable. Computational experiments conducted on data collected from Japanese web forums show that the method successfully discovers spam documents.
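As an illustration only (the paper's three alienness measures are not reproduced here), a toy version of the equivalence-class idea can be sketched as follows: substrings are grouped by their occurrence-position sets, each class is scored by a stand-in "alienness" value (here simply the length gap between its longest and shortest members), and a document is flagged if it contains a member of a high-scoring class.

```python
from collections import defaultdict

# Toy sketch of the equivalence-class idea only; the scoring rule below is a
# placeholder, not one of the paper's three measures. Suitable for small toy
# corpora: substrings are enumerated by brute force.

def substrings(text, max_len=20):
    for i in range(len(text)):
        for j in range(i + 1, min(i + 1 + max_len, len(text) + 1)):
            yield text[i:j]

def flag_spam(docs, threshold=10):
    corpus = "\x00".join(docs)            # separator keeps substrings inside one doc
    classes = defaultdict(set)            # occurrence-position set -> member substrings
    for s in set(substrings(corpus)):
        if "\x00" in s:
            continue
        occ = tuple(i for i in range(len(corpus)) if corpus.startswith(s, i))
        classes[occ].add(s)
    flagged = set()
    for members in classes.values():
        alienness = max(map(len, members)) - min(map(len, members))  # stand-in score
        if alienness >= threshold:
            rep = max(members, key=len)
            for k, doc in enumerate(docs):
                if rep in doc:
                    flagged.add(k)
    return flagged
```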
Algorithmic Learning Theory | 2012
Daiki Suehiro; Kohei Hatano; Shuji Kijima; Eiji Takimoto; Kiyohito Nagano
We consider an online prediction problem of combinatorial concepts where each combinatorial concept is represented as a vertex of a polyhedron described by a submodular function (base polyhedron). In general, there are exponentially many vertices in the base polyhedron. We propose polynomial time algorithms with regret bounds. In particular, for cardinality-based submodular functions, we give \(O(n^2)\)-time algorithms.
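For reference, given a submodular function \(f\colon 2^{[n]}\to\mathbb{R}\) with \(f(\emptyset)=0\), the base polyhedron mentioned above is
\[
B(f)=\bigl\{x\in\mathbb{R}^n : x(S)\le f(S)\ \text{for all } S\subseteq[n],\ x([n])=f([n])\bigr\},
\qquad x(S)=\textstyle\sum_{i\in S}x_i .
\]
Its vertices are generated by the greedy algorithm, one per ordering of \([n]\); for a cardinality-based function \(f(S)=g(|S|)\), every vertex is a permutation of the increment vector \((g(1)-g(0),\ldots,g(n)-g(n-1))\).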
Neural Computation | 2009
Liwei Wang; Masashi Sugiyama; Cheng Yang; Kohei Hatano; Jufu Feng
We study the problem of classification when only a dissimilarity function between objects is accessible. That is, data samples are represented not by feature vectors but in terms of their pairwise dissimilarities. We establish sufficient conditions for dissimilarity functions to allow building accurate classifiers. The theory immediately suggests a learning paradigm: construct an ensemble of simple classifiers, each depending on a pair of examples; then find a convex combination of them to achieve a large margin. We next develop a practical algorithm referred to as dissimilarity-based boosting (DBoost) for learning with dissimilarity functions under theoretical guidance. Experiments on a variety of databases demonstrate that the DBoost algorithm is promising for several dissimilarity measures widely used in practice.
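A minimal sketch of the ensemble construction described above (illustrative only; the function names and the AdaBoost-style reweighting are ours, not the DBoost algorithm itself): each weak classifier is indexed by one (positive, negative) pair of training examples and votes +1 when a point is less dissimilar to the positive member of the pair.

```python
import numpy as np

# Sketch under assumptions: d(a, b) is the given dissimilarity between examples
# a and b (examples are referred to by index); pos_idx/neg_idx index positive
# and negative training examples; y holds labels in {-1, +1}.

def pair_classifier(d, a, b):
    # Vote +1 if x is closer (less dissimilar) to the positive example a.
    return lambda x: 1.0 if d(x, a) < d(x, b) else -1.0

def boost_pairs(d, pos_idx, neg_idx, X_idx, y, rounds=50):
    weak = [pair_classifier(d, a, b) for a in pos_idx for b in neg_idx]
    w = np.full(len(X_idx), 1.0 / len(X_idx))      # distribution over examples
    alphas, chosen = [], []
    for _ in range(rounds):
        errs = [np.sum(w * (np.array([h(i) for i in X_idx]) != y)) for h in weak]
        j = int(np.argmin(errs))
        eps = max(float(errs[j]), 1e-12)
        if eps >= 0.5:
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        preds = np.array([weak[j](i) for i in X_idx])
        w *= np.exp(-alpha * y * preds)            # AdaBoost-style reweighting
        w /= w.sum()
        alphas.append(alpha)
        chosen.append(weak[j])
    # Final classifier: convex combination (up to scaling) of pair classifiers.
    return lambda x: np.sign(sum(a * h(x) for a, h in zip(alphas, chosen)))
```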
Algorithmic Learning Theory | 2014
Nir Ailon; Kohei Hatano; Eiji Takimoto
The permutahedron is the convex polytope with vertex set consisting of the vectors (π(1),…, π(n)) for all permutations (bijections) π over {1,…, n}. We study a bandit game in which, at each step t, an adversary chooses a hidden weight weight vector s t , a player chooses a vertex π t of the permutahedron and suffers an observed instantaneous loss of \(\sum_{i=1}^n\pi_t(i) s_t(i)\).
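As an illustrative worked instance of this loss: with \(n=3\), a played permutation \(\pi_t = (2,3,1)\), and weights \(s_t = (0.5, 0.1, 0.3)\), the incurred loss is \(2\cdot 0.5 + 3\cdot 0.1 + 1\cdot 0.3 = 1.6\).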
Discovery Science | 2009
Kohei Hatano; Eiji Takimoto
We propose a new boosting algorithm based on a linear programming formulation. Our algorithm can take advantage of the sparsity of the solution of the underlying optimization problem. In preliminary experiments, our algorithm outperforms a state-of-the-art LP solver and LPBoost especially when the solution is given by a small set of relevant hypotheses and support vectors.
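For reference, the 1-norm soft margin linear program solved by LPBoost, which is the kind of formulation the sparsity argument above refers to (standard notation, with \(m\) examples, base hypotheses \(h_j\), and soft margin parameter \(D\); the paper's own formulation may differ in details):
\[
\max_{\rho,\alpha,\xi}\ \rho - D\sum_{i=1}^{m}\xi_i
\quad\text{s.t.}\quad
y_i\sum_{j}\alpha_j h_j(x_i)\ \ge\ \rho-\xi_i,\qquad
\sum_{j}\alpha_j=1,\qquad \alpha\ge 0,\ \xi\ge 0 .
\]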
European Conference on Machine Learning | 2014
Yao Ma; Tingting Zhao; Kohei Hatano; Masashi Sugiyama
We consider the learning problem under an online Markov decision process (MDP), which is aimed at learning the time-dependent decision-making policy of an agent that minimizes the regret, i.e., the difference from the best fixed policy. The difficulty of online MDP learning is that the reward function changes over time. In this paper, we show that a simple online policy gradient algorithm achieves regret O(√T) for T steps under a certain concavity assumption and O(log T) under a strong concavity assumption. To the best of our knowledge, this is the first work to give an online MDP algorithm that can handle continuous state, action, and parameter spaces with guarantees. We also illustrate the behavior of the online policy gradient method through experiments.
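The kind of projected online gradient step the abstract refers to can be sketched as follows (a sketch under our own naming, not the paper's exact algorithm; the grad_loss callback and the L2-ball parameter set are placeholders):

```python
import numpy as np

# Minimal sketch: at each round the policy parameter theta is moved against the
# gradient of that round's loss and projected back onto a bounded parameter set
# Theta, here taken to be an L2 ball of radius R.

def project_l2_ball(theta, radius):
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def online_policy_gradient(grad_loss, theta0, T, radius=1.0):
    # grad_loss(t, theta): gradient of the round-t loss at theta (user-supplied)
    theta = np.asarray(theta0, dtype=float)
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)        # decaying step size, O(sqrt(T))-regret style
        theta = project_l2_ball(theta - eta * grad_loss(t, theta), radius)
    return theta
```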
Algorithmic Learning Theory | 2013
Takahiro Fujita; Kohei Hatano; Eiji Takimoto
We consider online prediction problems of combinatorial concepts. Examples of such concepts include s-t paths, permutations, truth assignments, set covers, and so on. The goal of the online prediction algorithm is to compete with the best fixed combinatorial concept in hindsight. A generic approach to this problem is to design an online prediction algorithm using the corresponding offline (approximation) algorithm as an oracle. The current state-of-the-art method, however, is not efficient enough. In this paper we propose a more efficient online prediction algorithm for the case where the offline approximation algorithm has a guarantee on its integrality gap.
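The generic oracle-based approach mentioned above can be sketched in the spirit of follow-the-perturbed-leader (this is the textbook template, not the more efficient algorithm proposed in the paper; the oracle and loss_stream callbacks are hypothetical):

```python
import numpy as np

# Follow-the-perturbed-leader with an offline oracle: each round, feed the
# cumulative loss vector minus a random perturbation to the offline
# (approximation) oracle and play the combinatorial concept it returns.

def fpl_with_oracle(oracle, loss_stream, n, T, eta=0.1, rng=None):
    # oracle(c): returns a concept (0/1 vector of length n) with (approximately)
    #            minimal linear cost c . x
    # loss_stream(t, x): loss vector revealed after playing concept x at round t
    rng = rng or np.random.default_rng(0)
    cum_loss = np.zeros(n)
    total = 0.0
    for t in range(T):
        perturbation = rng.exponential(scale=1.0 / eta, size=n)
        x = oracle(cum_loss - perturbation)
        loss_vec = loss_stream(t, x)
        total += float(loss_vec @ x)
        cum_loss += loss_vec
    return total
```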
Algorithmic Learning Theory | 2006
Kohei Hatano
Smooth boosting algorithms are variants of boosting methods which handle only smooth distributions on the data. They are provably noise-tolerant and can be used in the boosting-by-filtering scheme, which is suitable for learning over huge data sets. However, current smooth boosting algorithms leave room for improvement: among non-smooth boosting algorithms, Real AdaBoost and InfoBoost can perform more efficiently than typical boosting algorithms by using an information-based criterion for choosing hypotheses. In this paper, we propose a new smooth boosting algorithm with an information-based criterion based on the Gini index. We show that it inherits the advantages of both approaches, smooth boosting and information-based hypothesis selection.
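As a reminder of what a Gini-style selection criterion looks like (a toy version; the paper's exact criterion and its smooth-boosting weighting scheme are not reproduced here):

```python
import numpy as np

# Toy Gini-gain hypothesis selection: among candidate hypotheses, pick the one
# whose binary split of the weighted sample yields the largest reduction in
# Gini impurity. weights is a distribution over examples; labels and
# predictions take values in {-1, +1}.

def gini(p):
    return 2.0 * p * (1.0 - p)

def gini_gain(weights, labels, preds):
    p_all = weights[labels == 1].sum()
    gain = gini(p_all)
    for v in (-1, 1):
        mask = preds == v
        w = weights[mask].sum()
        if w > 0:
            p = weights[mask & (labels == 1)].sum() / w
            gain -= w * gini(p)
    return gain

def best_hypothesis(weights, labels, candidate_preds):
    # candidate_preds: iterable of prediction vectors, one per hypothesis
    return max(candidate_preds, key=lambda preds: gini_gain(weights, labels, preds))
```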
International Conference on Distributed, Ambient, and Pervasive Interactions | 2018
Kohei Hatano
Computer-based support for the learning of elderly people is now considered an important issue in a super-aged society. Elderly people need extra care in learning compared to younger people, since they may have difficulty using computers, reduced cognitive ability, and other physical problems that lower their motivation. Key components of a better learning support system are sensing the contexts surrounding elderly people and providing appropriate feedback to them. In this paper, we review some existing techniques from the contextual bandit framework in the machine learning literature, which are potentially useful for online decision-making scenarios with contexts. We also discuss issues and challenges in applying the framework.
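One standard contextual bandit technique of the kind such a review covers is LinUCB (Li et al., 2010); the sketch below is the generic textbook version and is not tied to the learning-support setting of the paper.

```python
import numpy as np

# Generic LinUCB with disjoint linear models: each arm keeps a ridge-regression
# estimate of its expected reward as a linear function of the context, and the
# arm with the highest upper confidence bound is chosen.

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def choose(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ context + self.alpha * np.sqrt(context @ A_inv @ context))
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```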
Algorithmic Learning Theory | 2011
Daiki Suehiro; Kohei Hatano; Eiji Takimoto
Finding linear classifiers that maximize AUC scores is important in ranking research. This is naturally formulated as a 1-norm hard/soft margin optimization problem over pn pairs of p positive and n negative instances. However, directly solving the optimization problems is impractical since the problem size (pn) is quadratically larger than the given sample size (p + n). In this paper, we give (approximate) reductions from the problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem is reduced to a hard margin optimization problem over p + n instances in which the bias constant term is to be optimized. Then, for the soft margin case, we show that the problem is approximately reduced to a soft margin optimization problem over p + n instances for which the resulting linear classifier is guaranteed to have a certain margin over pairs.
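Roughly, the pairwise 1-norm hard margin problem described above can be written as follows (generic notation; details may differ from the paper's exact formulation), with positive instances \(x_i^{+}\) and negative instances \(x_j^{-}\):
\[
\max_{w,\rho}\ \rho
\quad\text{s.t.}\quad
w^\top\bigl(x_i^{+}-x_j^{-}\bigr)\ \ge\ \rho\ \ (1\le i\le p,\ 1\le j\le n),
\qquad \|w\|_1 = 1 ,
\]
which has one constraint per positive-negative pair, i.e., \(pn\) constraints in total; the paper's reductions replace this with equivalent or approximate problems over only \(p+n\) instances.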