Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junpei Komiyama is active.

Publication


Featured research published by Junpei Komiyama.


Workshop on Internet and Network Economics | 2014

Time-Decaying Bandits for Non-stationary Systems

Junpei Komiyama; Tao Qin

Contents displayed on web portals (e.g., news articles at Yahoo.com) are usually selected adaptively from a dynamic set of candidate items, and the attractiveness of each item decays over time. The goal of these websites is to maximize user engagement (usually measured by clicks) on the selected items. We formulate this kind of application as a new variant of the bandit problem in which new arms are dynamically added to the candidate set and the expected reward of each arm decays as the rounds proceed. For this new problem, directly applying algorithms designed for the stochastic MAB (e.g., UCB) leads to over-estimation of the rewards of old arms, and thus to misidentification of the optimal arm. To tackle this challenge, we propose a new algorithm that adaptively estimates the temporal dynamics in the rewards of the arms and, on this basis, effectively identifies the best arm at a given time point. When the temporal dynamics are represented by a set of features, the proposed algorithm enjoys sub-linear regret. Our experiments verify the effectiveness of the proposed algorithm.
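The abstract does not spell out the algorithm, so the following is only a minimal Python sketch of the *setting* it describes: arms appear over time, their expected rewards decay, and a UCB-style learner corrects for the decay before comparing arms. The exponential decay with a known rate (DECAY) and the age-rescaling estimator are illustrative assumptions, not the paper's method (which represents the temporal dynamics by a set of features).

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5000       # rounds
DECAY = 0.999  # assumed per-round multiplicative decay of each arm's mean

# Each arm: (birth round, initial mean). New arms appear over time.
arms = [(0, 0.7), (0, 0.5), (1500, 0.8), (3000, 0.9)]

def mean_at(arm, t):
    """Expected reward of `arm` at round t: initial mean decayed by age."""
    birth, m0 = arm
    return m0 * DECAY ** (t - birth)

counts = np.zeros(len(arms))
# If the decay form is known, a reward r observed at age a is an unbiased
# sample of m0 * DECAY**a, so r / DECAY**a estimates the initial mean m0.
m0_sums = np.zeros(len(arms))

for t in range(T):
    live = [i for i, (b, _) in enumerate(arms) if b <= t]
    # UCB over the *current* decayed means, not stale historical averages;
    # a vanilla UCB would keep over-estimating old, decayed arms.
    ucb = []
    for i in live:
        if counts[i] == 0:
            ucb.append(np.inf)
            continue
        m0_hat = m0_sums[i] / counts[i]
        age = t - arms[i][0]
        bonus = np.sqrt(2 * np.log(t + 1) / counts[i])
        ucb.append(m0_hat * DECAY ** age + bonus)
    i = live[int(np.argmax(ucb))]
    r = float(rng.random() < mean_at(arms[i], t))    # Bernoulli reward
    counts[i] += 1
    m0_sums[i] += r / DECAY ** (t - arms[i][0])      # rescale by age

print("pulls per arm:", counts)
```

Run as-is, the learner shifts its pulls toward each newly arrived, not-yet-decayed arm, which is the qualitative behavior the abstract argues a plain stochastic-MAB algorithm would miss.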


European Conference on Machine Learning | 2014

Robust Distributed Training of Linear Classifiers Based on Divergence Minimization Principle

Junpei Komiyama; Hidekazu Oiwa; Hiroshi Nakagawa

We study distributed training of a linear classifier in which the data are separated into many shards and each worker has access only to its own shard. The goal of this distributed training is to utilize the data of all shards to obtain a well-performing linear classifier. The iterative parameter mixture (IPM) framework (Mann et al., 2009) is a state-of-the-art distributed learning framework with a strong theoretical guarantee when the data are clean. However, contamination of shards, which sometimes arises in real-world environments, severely degrades the performance of distributed training. To remedy the negative effect of the contamination, we propose a divergence minimization principle for determining the mixture weights in IPM. From this principle we naturally derive the Beta-IPM scheme, which leverages the power of robust estimation based on the beta divergence. A mistake/loss bound analysis indicates the advantage of Beta-IPM in contaminated environments. Experiments with various datasets revealed that Beta-IPM can suppress the influence of contamination even when 80% of the shards are contaminated.
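As a rough illustration of the IPM step described above, the sketch below trains a perceptron on each shard and then mixes the local weight vectors. The mixture weights here, based on agreement with the coordinate-wise median model, are a crude stand-in for Beta-IPM's beta-divergence-based weights; the synthetic data, the perceptron learner, and the label-flip contamination model are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def perceptron(X, y, epochs=5):
    """Train a simple perceptron on one shard; labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yi in zip(X, y):
            if yi * (x @ w) <= 0:
                w += yi * x
    return w

# Synthetic data: true separator w_star; several shards, one contaminated.
d, n_shards, n = 20, 8, 200
w_star = rng.normal(size=d)
shards = []
for s in range(n_shards):
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_star)
    if s == 0:                 # contaminated shard: all labels flipped
        y = -y
    shards.append((X, y))

# One IPM round: train locally, then mix the local weight vectors.
local_ws = np.array([perceptron(X, y) for X, y in shards])

# Plain IPM uses uniform mixture weights; Beta-IPM instead chooses the
# weights via beta-divergence minimization, which down-weights outlier
# shards. Stand-in here: weight by agreement with the median model.
median_w = np.median(local_ws, axis=0)
sims = np.maximum(local_ws @ median_w, 0.0)
weights = sims / (sims.sum() + 1e-12)

w_mixed = weights @ local_ws

def accuracy(w):
    X = rng.normal(size=(1000, d))
    return np.mean(np.sign(X @ w) == np.sign(X @ w_star))

print("uniform IPM acc:     ", accuracy(local_ws.mean(axis=0)))
print("robust-weighted acc: ", accuracy(w_mixed))
```

The point of the comparison at the end is the abstract's claim in miniature: a uniform mixture lets the contaminated shard pull the combined classifier off course, while a robust weighting suppresses its influence.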


International Conference on Machine Learning | 2015

Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays

Junpei Komiyama; Junya Honda; Hiroshi Nakagawa


Conference on Learning Theory | 2015

Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem

Junpei Komiyama; Junya Honda; Hisashi Kashima; Hiroshi Nakagawa


Asian Conference on Machine Learning | 2013

Multi-armed Bandit Problem with Lock-up Periods

Junpei Komiyama; Issei Sato; Hiroshi Nakagawa


International Conference on Machine Learning | 2016

Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm

Junpei Komiyama; Junya Honda; Hiroshi Nakagawa


Knowledge Discovery and Data Mining | 2017

Statistical Emerging Pattern Mining with Multiple Testing Correction

Junpei Komiyama; Masakazu Ishihata; Hiroki Arimura; Takashi Nishibayashi; Shin-ichi Minato


Neural Information Processing Systems | 2015

Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring

Junpei Komiyama; Junya Honda; Hiroshi Nakagawa


International Conference on Machine Learning | 2018

Nonconvex Optimization for Regression with Fairness Constraints

Junpei Komiyama; Akiko Takeda; Junya Honda; Hajime Shimao


Collaboration


Dive into Junpei Komiyama's collaborations.
