
Publication


Featured research published by Yuya Kaneda.


Systems, Man and Cybernetics | 2012

Decision boundary learning based on an improved PSO algorithm

Kyohei Watarai; Qiangfu Zhao; Yuya Kaneda

The goal of this research is to design a multimedia analyzer (MA) that can be embedded in portable devices. The MA can recognize different multimedia patterns (e.g., text and images) and help the user analyze multimedia contents more efficiently. To realize the MA in an environment with limited computing resources, we propose a new concept called decision boundary learning (DBL). The basic idea is to generate training patterns close to the decision boundary (DB), so that a neural network (NN) with high generalization ability can be obtained. In this paper, the DB is first approximated using a support vector machine (SVM), and the desired training patterns are found using an improved particle swarm optimization (PSO) algorithm. Experimental results show that the NNs so obtained are comparable in performance to the SVMs, although the former are much more compact.
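As a rough illustration of the DBL idea, the sketch below uses a basic PSO to drive particles toward the zero level set of a decision function. The linear stand-in for the trained SVM, the PSO constants, and the search range are all assumptions for the example, not taken from the paper.

```python
import random

# Hypothetical linear stand-in for the trained SVM's decision function:
# f(x) = w.x + b; |f(x)| close to 0 means x lies near the decision boundary.
W, B = [1.0, -2.0], 0.5

def decision_value(x):
    return sum(wi * xi for wi, xi in zip(x, W)) + B

def pso_boundary_points(n_particles=20, iters=100, dim=2, seed=0):
    """Minimise |f(x)| with a basic PSO so particles settle near the DB."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]                       # per-particle best
    gbest = min(best, key=lambda p: abs(decision_value(p)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive + social terms (standard PSO update)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (best[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if abs(decision_value(pos[i])) < abs(decision_value(best[i])):
                best[i] = pos[i][:]
                if abs(decision_value(best[i])) < abs(decision_value(gbest)):
                    gbest = best[i][:]
    return best  # boundary-proximal candidates for NN training

points = pso_boundary_points()
```

In the paper the decision function comes from a trained SVM rather than fixed weights; the returned points would then serve as training patterns for the compact NN.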


Systems, Man and Cybernetics | 2012

A study on the effect of learning parameters for inducing compact SVM

Yuya Kaneda; Qiangfu Zhao; Kyohei Watarai

The support vector machine (SVM) is one of the best machine learning models, offering high accuracy for both recognition and regression. One drawback of using an SVM is that the system implementation cost is usually proportional to the number of training data and the dimension of the feature space, so it is difficult to use SVMs in mobile devices such as IC cards and smartphones. In our study, we have tried to solve this problem using dimensionality reduction (DR). Since the implementation cost of DR must also be considered in a restricted computing environment, we adopted a simple centroid-based DR method. In this paper, we investigate the effect of the learning parameters on the performance of the system, and provide some insights on obtaining compact SVMs.


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2013

A Study on the Effect of Learning Parameters for Inducing Compact SVM

Yuya Kaneda; Qiangfu Zhao; Yong Liu; Neil Y. Yen

The support vector machine (SVM) is one of the best machine learning models, offering high accuracy for both recognition and regression. One drawback of using an SVM is that the system implementation cost is usually proportional to the number of training data and the dimension of the feature space, so it is difficult to use SVMs in mobile devices such as IC cards and smartphones. In our study, we have tried to solve this problem using dimensionality reduction (DR). Since the implementation cost of DR must also be considered in a restricted computing environment, we adopted a simple centroid-based DR method. In this paper, we investigate the effect of the learning parameters on the performance of the system, and provide some insights on obtaining compact SVMs.
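One simple form a centroid-based DR method can take (the exact method used in the paper is not specified here, so the mapping below is an assumption) is to replace each sample's original features with its distances to the per-class centroids, reducing the dimension to the number of classes:

```python
from math import dist

def class_centroids(X, y):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for d, v in enumerate(x):
            acc[d] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def reduce_features(X, centroids):
    """Map each sample to its distances to every class centroid."""
    labels = sorted(centroids)
    return [[dist(x, centroids[c]) for c in labels] for x in X]

# Toy data: two well-separated classes in a 3-D feature space.
X = [[0.0, 0.0, 1.0], [0.2, 0.1, 0.9], [5.0, 5.0, 0.0], [4.8, 5.2, 0.1]]
y = [0, 0, 1, 1]
cents = class_centroids(X, y)
Xr = reduce_features(X, cents)   # 3-D inputs reduced to 2-D (two classes)
```

The reduced representation is cheap to compute at run time (one distance per class), which is what makes this family of DR methods attractive on restricted devices.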


Systems, Man and Cybernetics | 2016

DBM vs ELM: A study on effective training of compact MLP

Masato Hashimoto; Yuya Kaneda; Qiangfu Zhao; Yong Liu

We compare the performance of multilayer perceptrons (MLPs) obtained using back propagation (BP), the decision boundary making (DBM) algorithm, and the extreme learning machine (ELM), and investigate which method is better for developing aware agents (A-agents) that are suitable for implementation in portable/wearable computing devices (P/WCDs). We proposed the DBM for inducing compact, high-performance learning models suitable for implementation in P/WCDs. The basic idea of the DBM is to generate data that fit the decision boundary (DB) of a high-performance model, and then induce a compact model from the generated data. In our study, a support vector machine (SVM) is used as the high-performance model, and a single-hidden-layer MLP as the compact model for the DBM algorithm. The ELM has attracted attention in recent years as a new learning method for neural networks: its hidden layer need not be tuned, which allows much faster training than traditional gradient-based learning methods. Experimental results show that, for all databases used in the experiment, DBM achieves the highest performance of the three training methods when the number of hidden neurons is small; that is, the accuracy of DBM converges to a high score even with few hidden neurons. We therefore conclude that DBM is the best of the three algorithms for developing compact, high-performance A-agents.
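To make the ELM side of the comparison concrete, here is a minimal ELM sketch: the hidden-layer weights are random and never tuned, and only the output weights are fitted, here by ridge-regularised least squares solved with Gaussian elimination. The toy task, network size, and regularisation constant are assumptions for illustration, not settings from the paper.

```python
import math, random

random.seed(1)
H_DIM, IN_DIM = 8, 2

# Random, untrained hidden layer (the defining feature of an ELM).
W_h = [[random.gauss(0, 1) for _ in range(IN_DIM)] for _ in range(H_DIM)]
b_h = [random.gauss(0, 1) for _ in range(H_DIM)]

def hidden(x):
    return [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
            for row, b in zip(W_h, b_h)]

def solve(A, y, lam=1e-3):
    """Solve (A^T A + lam*I) beta = A^T y by Gaussian elimination."""
    n = len(A[0])
    M = [[sum(r[i] * r[j] for r in A) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    v = [sum(r[i] * t for r, t in zip(A, y)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = M[i][i]
        for j in range(i + 1, n):
            f = M[j][i] / p
            M[j] = [a - f * b for a, b in zip(M[j], M[i])]
            v[j] -= f * v[i]
    beta = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        beta[i] = (v[i] - sum(M[i][j] * beta[j]
                              for j in range(i + 1, n))) / M[i][i]
    return beta

# Toy binary task: label = +1 if x0 + x1 > 0, else -1.
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
Y = [1.0 if x[0] + x[1] > 0 else -1.0 for x in X]
H = [hidden(x) for x in X]
beta = solve(H, Y)                           # the only trained parameters
pred = [1.0 if sum(b * h for b, h in zip(beta, hidden(x))) > 0 else -1.0
        for x in X]
acc = sum(p == t for p, t in zip(pred, Y)) / len(Y)
```

Because training reduces to one linear solve, ELM training is fast, but, as the abstract notes, it can need more hidden neurons than DBM to reach the same accuracy.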


Systems, Man and Cybernetics | 2016

Guide data generation for on-line learning of DBM-initialized MLP

Yuya Kaneda; Qiangfu Zhao; Yong Liu

In this paper, we propose a method for generating guide data, and investigate its efficiency and efficacy for on-line learning with guide data. On-line learning in this research updates a learning model initialized by the decision boundary making algorithm proposed in our earlier study. The problem is that, if the guide data are not properly generated, on-line learning may incur high computational cost in terms of time, and the learning process may not converge to a good result. To solve this problem, we propose to cluster all candidate guide data with k-means, and use one datum from each cluster as a guide datum. We conducted experiments on several public databases, using different settings, and confirmed the performance of the proposed method. Specifically, with k = 5, we can obtain good models at low computational cost through on-line learning.
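The selection step described above can be sketched as follows: run k-means over the candidate guide data and keep, from each cluster, the single candidate nearest to the cluster centre. The k-means details (initialisation, iteration count) and the toy candidate set are assumptions for the example.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(g):
    return [sum(col) / len(g) for col in zip(*g)]

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns the k cluster centres."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[i].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def select_guide_data(candidates, k=5):
    """One guide datum per cluster: the candidate nearest each centre."""
    centers = kmeans(candidates, k)
    return [min(candidates, key=lambda p: dist2(p, c)) for c in centers]

# Toy candidate pool of 100 points in 2-D.
cands = [[random.Random(i).uniform(0, 10),
          random.Random(i + 99).uniform(0, 10)] for i in range(100)]
guides = select_guide_data(cands)   # 5 representative guide data
```

Selecting actual candidates (rather than the centres themselves) keeps every guide datum a real, previously observed pattern.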


IEEE Symposium Series on Computational Intelligence | 2016

On-line training with guide data: Shall we select the guide data randomly or based on cluster centers?

Yuya Kaneda; Qiangfu Zhao; Yong Liu

To retrain an existing multilayer perceptron (MLP) on-line using newly observed data, it is necessary to incorporate the new information while preserving the performance of the network. This is known as the “plasticity-stability” problem. For this purpose, we proposed an algorithm for on-line training with guide data (OLTA-GD). OLTA-GD is well suited to implementation in portable/wearable computing devices (P/WCDs) because of its low computational cost, and can make us less dependent on the Internet. Results obtained so far show that, in most cases, OLTA-GD can improve an MLP steadily. One question in using OLTA-GD is how to select the guide data more efficiently. In this paper, we investigate two methods for guide data selection. The first is to select the guide data randomly from a candidate data set G; the other is to cluster G first and select the guide data based on the cluster centers. Results show that the two methods do not differ significantly, in the sense that both preserve the performance of the MLP well. However, if we consider the risk of “instantaneous performance degradation”, random selection is not recommended. In other words, cluster center-based selection provides more reliable results for the user during on-line training.


IEEE Symposium Series on Computational Intelligence | 2016

An ELM-based privacy preserving protocol for cloud systems

Masato Hashimoto; Yuya Kaneda; Qiangfu Zhao

In this paper, we propose a privacy preserving protocol, based on the extreme learning machine (ELM), for using cloud systems. The purpose is to implement aware agents (A-agents) on portable/wearable computing devices (P/WCDs). The basic idea of the protocol is to divide an ELM-based A-agent into two parts, one containing the weights of the hidden layer(s) and the other containing the weights of the output layer. The former is implemented on a remote server and the latter on the P/WCD. In addition, the input data are first encrypted on the P/WCD using a transposition cipher, and then sent to the server. Because the server can only “see” random weights and encrypted data, the user's intention and privacy are protected. Moreover, since part of the computation is executed on the server, the cost of implementing A-agents on the P/WCD is reduced. Experimental results on several public databases show that the proposed protocol is useful when the dimension of the input data is high.
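A minimal sketch of the split-computation idea (the concrete protocol details below are assumptions, not the paper's specification): the client keeps a secret feature permutation (the transposition cipher) and the output weights, while the server holds only the random hidden weights, stored with their columns permuted by the same secret key. The server's activations are then exactly correct, yet it only ever sees shuffled inputs and random weights.

```python
import math, random

random.seed(2)
IN_DIM, H_DIM = 6, 4
perm = list(range(IN_DIM))
random.shuffle(perm)                         # secret key, kept on the client

W_true = [[random.gauss(0, 1) for _ in range(IN_DIM)] for _ in range(H_DIM)]
# Server stores hidden weights with columns permuted by the secret key.
W_server = [[row[perm[j]] for j in range(IN_DIM)] for row in W_true]
beta = [random.gauss(0, 1) for _ in range(H_DIM)]   # output layer (client)

def client_encrypt(x):
    """Transposition cipher: shuffle the feature order."""
    return [x[perm[j]] for j in range(IN_DIM)]

def server_hidden(x_enc):
    """Server side: random-weight hidden layer on encrypted input."""
    return [math.tanh(sum(w * v for w, v in zip(row, x_enc)))
            for row in W_server]

def client_output(h):
    """Client side: output layer on the returned activations."""
    return sum(b * v for b, v in zip(beta, h))

x = [0.5, -1.0, 0.3, 2.0, -0.7, 1.1]
y_split = client_output(server_hidden(client_encrypt(x)))
# Reference: the same network evaluated locally without the split.
y_local = client_output([math.tanh(sum(w * v for w, v in zip(row, x)))
                         for row in W_true])
```

The permuted-column trick is why a transposition cipher composes cleanly with a linear hidden layer; a substitution cipher would change the computed values.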


IEEE Symposium Series on Computational Intelligence | 2016

A new protocol for on-line user identification based on hand-writing characters

Ryota Hanyu; Qiangfu Zhao; Yuya Kaneda

Biometric authentication (BA) is becoming more and more popular. Usually, we expect that BA can make various service systems more secure, but in fact it can be more dangerous. For example, the fingerprint is one of the most popular biometrics for authentication. We say it is dangerous because we cannot change our fingerprints even if they are collected and duplicated by a malicious third party. Such “life-long” biometrics, once stolen, can never be used as an authentication factor again. To solve this problem, we may use “changeable” biometrics; examples include the face, the voice, and handwritten characters. In this study, we use handwritten characters. Handwritten characters change naturally with aging, and they can also be changed intentionally through training. This paper investigates the feasibility of on-line user identification using handwritten non-alphanumeric characters. Our main purpose is to develop core technologies that can improve the security of service systems in some Asian countries that use Chinese characters.


Soft Computing and Pattern Recognition | 2015

Strategies for determining effective step size of the backpropagation algorithm for on-line learning

Yuya Kaneda; Qiangfu Zhao; Yong Liu; Yan Pei

In this paper, we investigate proper strategies for determining the step size of the backpropagation (BP) algorithm for on-line learning. It is known that for off-line learning, the step size can be determined adaptively during learning. For on-line learning, since the same data may never appear again, we cannot use the strategies proposed for off-line learning. If we do not update the neural network with a proper step size during on-line learning, the performance of the network may not improve steadily. Here, we investigate four strategies for updating the step size: (1) constant, (2) random, (3) linearly decreasing, and (4) inversely proportional. The first strategy uses a constant step size during learning, the second uses a random step size, the third decreases the step size linearly, and the fourth updates the step size in inverse proportion to time. Experimental results show that the third and the fourth strategies are more effective. In addition, compared with the third strategy, the fourth is more stable and usually improves the performance steadily.
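The four strategies above can be written as schedule functions of the update index t. The base step size, horizon, and random range below are illustrative constants, not values from the paper.

```python
import random

ETA0, T_MAX = 0.1, 1000        # assumed base step size and schedule horizon
rng = random.Random(0)

def constant(t):
    return ETA0                                  # (1) fixed step size

def random_step(t):
    return rng.uniform(0, ETA0)                  # (2) random in (0, ETA0)

def linear_decrease(t):
    return ETA0 * max(0.0, 1 - t / T_MAX)        # (3) linear decay to zero

def inverse_proportional(t):
    return ETA0 / (1 + t)                        # (4) ~ 1/t decay

schedules = [constant, random_step, linear_decrease, inverse_proportional]
etas_at_100 = [f(100) for f in schedules]
```

The 1/t shape of strategy (4) shrinks quickly at first and then very slowly, which is consistent with the reported stability advantage over the linear schedule.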


Systems, Man and Cybernetics | 2014

Study on the Effect of Learning Parameters on Decision Boundary Making Algorithm

Yuya Kaneda; Yan Pei; Qiangfu Zhao; Yong Liu

The purpose of our study is to induce compact, high-performance machine learning models. In our earlier study, we proposed the decision boundary making (DBM) algorithm. The main philosophy of the DBM algorithm is to reconstruct a high-performance model at a much smaller cost. In our study, we use a support vector machine as the high-performance model, and a multilayer neural network, i.e., a multilayer perceptron (MLP), as the small model. Experimental results obtained so far show that compact, high-performance MLPs can be obtained using DBM. However, several parameters of DBM need to be adjusted appropriately to achieve better performance. In this paper, we investigate the effect of the parameter N, the number of newly generated data, on the performance of the obtained MLPs, and discuss how many new data should be generated for DBM to perform better. We also investigate the effect of outliers on the performance of the obtained MLPs. Outliers are generally considered harmful for pattern recognition. Our experimental results show, however, that for some databases, outliers can be useful for obtaining high-performance MLPs.
