
Publication


Featured research published by Hideitsu Hino.


Neural Computation | 2010

A conditional entropy minimization criterion for dimensionality reduction and multiple kernel learning

Hideitsu Hino; Noboru Murata

Reducing the dimensionality of high-dimensional data without losing its essential information is an important task in information processing. When class labels of training data are available, Fisher discriminant analysis (FDA) has been widely used. However, the optimality of FDA is guaranteed only in a very restricted ideal circumstance, and it is often observed that FDA does not provide a good classification surface for many real problems. This letter treats the problem of supervised dimensionality reduction from the viewpoint of information theory and proposes a framework for dimensionality reduction based on class-conditional entropy minimization. The proposed linear dimensionality-reduction technique is validated both theoretically and experimentally. Then, through kernel Fisher discriminant analysis (KFDA), the multiple kernel learning problem is treated in the proposed framework, and a novel algorithm, which iteratively optimizes the parameters of the classification function and the kernel combination coefficients, is proposed. The algorithm is experimentally shown to be comparable to or better than KFDA on large-scale benchmark data sets, and comparable to other multiple kernel learning techniques on the yeast protein function annotation task.
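The class-conditional entropy criterion can be illustrated with a toy sketch (this is not the paper's algorithm; the simple kNN entropy estimate, the synthetic data, and the grid search over directions below are all illustrative assumptions): project labelled data onto a candidate direction, estimate the entropy of each class along it, and keep the direction with the smallest weighted sum.

```python
import numpy as np

def knn_entropy(x, k=3):
    # 1-D Kozachenko-Leonenko-style entropy estimate, up to additive
    # constants that do not depend on the projection direction
    x = np.asarray(x, float)
    n = len(x)
    d = np.array([np.sort(np.abs(x - xi))[k] for xi in x])  # k-th NN distance
    return np.mean(np.log(np.maximum(d, 1e-12))) + np.log(n - 1)

def conditional_entropy(X, y, w):
    # weighted average of per-class entropies of the projected data
    z = X @ (w / np.linalg.norm(w))
    return sum(np.mean(y == c) * knn_entropy(z[y == c]) for c in np.unique(y))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [1, 3], (100, 2)),    # class 0
               rng.normal([4, 0], [1, 3], (100, 2))])   # class 1
y = np.repeat([0, 1], 100)

# classes are compact along the x-axis and spread along the y-axis,
# so the entropy-minimising direction should be close to the x-axis
angles = np.linspace(0.0, np.pi, 90, endpoint=False)
best = min(angles, key=lambda a: conditional_entropy(
    X, y, np.array([np.cos(a), np.sin(a)])))
```

A grid search suffices in 2-D; the paper instead optimizes the projection (and, in the kernel setting, the combination coefficients) iteratively.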


Neural Networks | 2015

Multi-frame image super resolution based on sparse coding

Toshiyuki Kato; Hideitsu Hino; Noboru Murata

An image super-resolution method from multiple observations of low-resolution images is proposed. The method is based on sub-pixel-accuracy block matching for estimating the relative displacements of the observed images, and on sparse signal representation for estimating the corresponding high-resolution image, where the correspondence between high- and low-resolution images is modeled by a certain degradation process. Relative displacements of small patches of the observed low-resolution images are accurately estimated by a computationally efficient block matching method. The matching scores are used to select a subset of low-resolution patches for reconstructing each high-resolution patch; that is, an adaptive selection of informative low-resolution images is realized. The proposed method is shown to perform comparably to or better than conventional super-resolution methods in experiments using various images.
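The displacement-estimation step can be sketched with integer-pixel block matching (a simplification: the paper uses sub-pixel accuracy, and the wrap-around shifts and synthetic image below are illustrative assumptions): try each candidate displacement and keep the one minimising the sum of squared differences.

```python
import numpy as np

def block_match(ref, obs, search=3):
    """Integer-pixel block matching: try every displacement within
    +/-search pixels and return the one minimising the SSD score."""
    best_score, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(obs, dy, axis=0), dx, axis=1)
            score = float(np.sum((ref - shifted) ** 2))
            if score < best_score:
                best_score, best_d = score, (dy, dx)
    return best_d, best_score

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
obs = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)  # observation shifted by (2, -1)
(dy, dx), score = block_match(ref, obs)             # recovers the inverse shift
```

In the paper the resulting matching scores also drive the adaptive selection of which low-resolution patches enter the sparse-coding reconstruction.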


Neural Networks | 2013

Information estimators for weighted observations

Hideitsu Hino; Noboru Murata

The Shannon information content is a valuable numerical characteristic of probability distributions. Estimating the information content from an observed dataset is an important problem in statistics, information theory, and machine learning. The contribution of this paper is to propose information estimators and to show some of their applications. When the given data are associated with weights, each datum contributes differently to the empirical average of statistics; the proposed estimators can deal with such weighted data. Like other conventional methods, the proposed information estimator contains a parameter to be tuned and is computationally expensive. To overcome these problems, the estimator is further modified so that it is more computationally efficient and has no tuning parameter. The proposed methods are also extended to estimate the cross-entropy, entropy, and Kullback-Leibler divergence. Simple numerical experiments show that the information estimators work properly. The estimators are then applied to two specific problems: distribution-preserving data compression and weight optimization for ensemble regression.
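The idea of a weighted entropy estimator can be sketched with a simple plug-in estimator (not the paper's estimator; the weighted leave-one-out kernel density, bandwidth, and data below are illustrative assumptions): weights enter both the density estimate and the empirical average of the log-density.

```python
import numpy as np

def weighted_entropy(x, w, bw=0.3):
    """Plug-in entropy estimate H = -sum_i w_i log p_hat(x_i), where p_hat
    is a weighted leave-one-out Gaussian kernel density estimate."""
    x = np.asarray(x, float)
    w = np.asarray(w, float) / np.sum(w)
    H = 0.0
    for i in range(len(x)):
        d = x[i] - np.delete(x, i)
        wi = np.delete(w, i)
        wi = wi / wi.sum()
        p = np.sum(wi * np.exp(-0.5 * (d / bw) ** 2)) / (bw * np.sqrt(2 * np.pi))
        H -= w[i] * np.log(max(p, 1e-300))
    return H

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
H_uniform = weighted_entropy(x, np.ones_like(x))       # ~ entropy of N(0, 1)
H_halved = weighted_entropy(x, (x > 0).astype(float))  # weights tilt the target
```

With uniform weights this approaches the Gaussian entropy 0.5 log(2*pi*e); concentrating the weights on the positive half targets a narrower (half-normal) distribution and yields a smaller estimate, which is the behaviour weighted data demand.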


International Conference on Machine Learning and Applications | 2010

Multiple Kernel Learning by Conditional Entropy Minimization

Hideitsu Hino; Nima Reyhani; Noboru Murata

Kernel methods have been used successfully in many practical machine learning problems, but choosing a suitable kernel is left to the practitioner. A common route to automatic selection of an optimal kernel is to learn a linear combination of element kernels. In this paper, a novel framework for multiple kernel learning is proposed based on a conditional entropy minimization criterion, and three multiple kernel learning algorithms are derived from it. The algorithms are experimentally shown to be comparable to or better than kernel Fisher discriminant analysis and other multiple kernel learning algorithms on benchmark data sets.


Trust, Security and Privacy in Computing and Communications | 2015

Group Sparsity Tensor Factorization for De-anonymization of Mobility Traces

Takao Murakami; Atsunori Kanemura; Hideitsu Hino

The de-anonymization attack using personalized transition matrices is known as one of the most successful approaches to linking anonymized traces with users. However, since many users disclose only a small amount of location information to the public in their daily lives, the amount of training data available to the adversary can be very small. The aim of this paper is to quantify the risk of de-anonymization in this realistic situation. To achieve this aim, we utilize the fact that spatial data can form a group structure, and propose group sparsity tensor factorization to train personalized transition matrices that capture the spatial group structure from a small amount of training data. We apply our training method to the de-anonymization attack and evaluate it using the Geolife dataset. The results show that the training method using tensor factorization outperforms maximum-likelihood estimation, and is further improved by incorporating group sparsity regularization.
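The maximum-likelihood baseline that the paper improves upon can be sketched as follows (a toy illustration only; the group-sparsity tensor factorization itself is not reproduced, and the traces and smoothing constant are invented): estimate a per-user transition matrix from that user's trace, then attribute an anonymized trace to the user under whose matrix it is most likely.

```python
import numpy as np

def transition_matrix(trace, n_loc, alpha=1.0):
    """Maximum-likelihood personalised transition matrix with additive
    smoothing alpha; entry [a, b] estimates P(next = b | current = a)."""
    C = np.full((n_loc, n_loc), alpha)
    for a, b in zip(trace[:-1], trace[1:]):
        C[a, b] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def trace_loglik(trace, P):
    # log-likelihood of a trace under a first-order Markov mobility model
    return float(sum(np.log(P[a, b]) for a, b in zip(trace[:-1], trace[1:])))

# two users with distinct mobility patterns over 4 locations
user0 = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
user1 = [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]
P0 = transition_matrix(user0, 4)
P1 = transition_matrix(user1, 4)

anon = [0, 1, 0, 1, 0]  # anonymised trace (actually user0's)
guess = 0 if trace_loglik(anon, P0) > trace_loglik(anon, P1) else 1
```

With very short training traces these count-based estimates become unreliable, which is exactly the regime where the paper's factorization-based training helps.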


Neural Computation | 2012

Multiple kernel learning with gaussianity measures

Hideitsu Hino; Nima Reyhani; Noboru Murata

Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in their practical use is the selection of the kernel, and there have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the promising kernel optimization approaches. Kernel methods are applied to various classifiers, including Fisher discriminant analysis (FDA). FDA gives the Bayes-optimal classification axis if the data distribution of each class in the feature space is a gaussian with a shared covariance structure. Based on this fact, an MKL framework based on the notion of gaussianity is proposed. As a concrete implementation, an empirical characteristic function is adopted to measure gaussianity in the feature space associated with a convex combination of kernel functions, and two MKL algorithms are derived. Experimental results on several data sets show that the proposed kernel learning followed by FDA offers strong classification power.
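The empirical-characteristic-function gaussianity measure can be sketched on raw 1-D samples (an illustration only; in the paper the measure is evaluated in the feature space induced by a convex combination of kernels, and the evaluation points ts below are an assumption): compare the sample's empirical characteristic function with exp(-t^2/2), the characteristic function of a standard Gaussian.

```python
import numpy as np

def ecf_gaussianity(z, ts=np.linspace(0.1, 2.0, 10)):
    """Squared distance between the empirical characteristic function of
    standardised data and exp(-t^2/2); smaller means more Gaussian-looking."""
    z = (z - z.mean()) / z.std()
    ecf = np.array([np.mean(np.exp(1j * t * z)) for t in ts])
    return float(np.sum(np.abs(ecf - np.exp(-ts ** 2 / 2.0)) ** 2))

rng = np.random.default_rng(0)
score_gauss = ecf_gaussianity(rng.normal(size=5000))       # near zero
score_expon = ecf_gaussianity(rng.exponential(size=5000))  # clearly larger
```

An MKL scheme in this spirit would adjust the kernel combination coefficients so that the projected class distributions minimise such a score.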


Computer Analysis of Images and Patterns | 2009

Calibration of Radially Symmetric Distortion by Fitting Principal Component

Hideitsu Hino; Yumi Usami; Jun Fujiki; Shotaro Akaho; Noboru Murata

To calibrate radially symmetric distortion of omnidirectional cameras such as fish-eye lenses, calibration parameters are usually estimated so that lines that are supposed to be straight in the 3D scene are mapped to straight lines in the calibrated image. In this paper, the problem is treated as a problem of fitting the principal component in uncalibrated images, and an estimation procedure for the calibration parameters based on principal component analysis is proposed. Experimental results on synthetic data and real images demonstrate the performance of the proposed calibration method.
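The straightness criterion behind such calibration can be sketched as a PCA line fit (an illustration, not the paper's full procedure; the synthetic points are invented): the smallest eigenvalue of the point cloud's covariance gives the residual of a total-least-squares line, which is zero exactly when the points are collinear. A calibration would search for distortion parameters driving this residual toward zero.

```python
import numpy as np

def pca_line_residual(points):
    """Fit a line by PCA (total least squares) and return the sum of
    squared perpendicular distances; zero iff the points are collinear."""
    P = np.asarray(points, float)
    P = P - P.mean(axis=0)
    cov = P.T @ P / len(P)
    return float(len(P) * np.linalg.eigvalsh(cov)[0])  # smallest eigenvalue

t = np.linspace(0.0, 1.0, 20)
straight = np.column_stack([t, 2 * t + 1])             # an undistorted line
bent = np.column_stack([t, 2 * t + 1 + 0.5 * t ** 2])  # a "distorted" line
r_straight = pca_line_residual(straight)
r_bent = pca_line_residual(bent)
```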


Computational Statistics & Data Analysis | 2013

Entropy-based sliced inverse regression

Hideitsu Hino; Keigo Wakayama; Noboru Murata

The importance of dimension reduction has been growing with the increasing size of available data in many fields. An appropriate dimension reduction of raw data helps to reduce computational time and to expose the intrinsic structure of complex data. Sliced inverse regression is a well-known dimension reduction method for regression, which assumes an elliptical distribution for the explanatory variable and ingeniously reduces the problem of dimension reduction to a simple eigenvalue problem. Because sliced inverse regression rests on strong assumptions about the data distribution and the form of the regression function, a number of methods have been proposed to relax or remove these assumptions and extend the applicability of inverse regression. However, each of these methods is known to have drawbacks, either theoretical or empirical. To alleviate the drawbacks of existing methods, a dimension reduction method for regression based on the notion of conditional entropy minimization is proposed. Using entropy as a measure of the dispersion of data, a low-dimensional subspace is estimated without assuming any specific distribution or any form of regression function. The proposed method is shown to perform comparably to or better than conventional methods in experiments using artificial and real-world datasets.
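The classical sliced-inverse-regression eigenvalue problem that this line of work generalizes can be sketched as follows (the entropy-based variant itself is not implemented here; the data and slice count are illustrative assumptions): whiten the predictors, slice on the sorted response, and take the leading eigenvector of the covariance of the slice means.

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """Classical sliced inverse regression: whiten X, slice on the sorted
    response, and take the top eigenvector of the slice-mean covariance."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.linalg.inv(np.cov(X.T)))
    Z = (X - mu) @ L                      # whitened predictors
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum(len(s) * np.outer(Z[s].mean(0), Z[s].mean(0))
            for s in slices) / len(y)
    _, vecs = np.linalg.eigh(M)
    beta = L @ vecs[:, -1]                # map back to original coordinates
    return beta / np.linalg.norm(beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=2000)  # depends on X[:, 0] only
b = sir_direction(X, y)                          # should align with e_1
```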


Neural Computation | 2010

A grouped ranking model for item preference parameter

Hideitsu Hino; Yu Fujimoto; Noboru Murata

Given a set of rating data for a set of items, determining the preference levels of the items is a matter of importance. Various probability models have been proposed for this task. One such model is the Plackett-Luce model, which parameterizes the preference level of each item by a real value. In this letter, the Plackett-Luce model is generalized to cope with grouped ranking observations such as movie or restaurant ratings. Since it is difficult to maximize the likelihood of the proposed model directly, a feasible approximation is derived, and the EM algorithm is adopted to find the model parameters by maximizing the approximate likelihood, which is easily evaluated. The proposed model is extended to a mixture model, and two applications are presented. To show the effectiveness of the proposed model, numerical experiments with real-world data are carried out.
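The underlying Plackett-Luce model can be sketched with a standard minorisation-maximisation fit on toy full rankings (the paper's grouped-ranking generalization and its approximate EM are not reproduced here; the rankings below are invented): each item gets a positive weight, and a ranking's likelihood is a product of choice probabilities over successive stages.

```python
import numpy as np

def fit_plackett_luce(rankings, n_items, iters=200):
    """Hunter-style minorisation-maximisation for Plackett-Luce preference
    parameters; each ranking lists items from most to least preferred."""
    v = np.ones(n_items)
    for _ in range(iters):
        wins = np.zeros(n_items)
        denom = np.zeros(n_items)
        for r in rankings:
            for i in range(len(r) - 1):
                wins[r[i]] += 1.0      # r[i] is the item chosen at stage i
                s = v[r[i:]].sum()     # total weight of the remaining choice set
                for j in r[i:]:
                    denom[j] += 1.0 / s
        v = wins / np.maximum(denom, 1e-12)
        v /= v.sum()                   # fix the scale (PL is scale-invariant)
    return v

# item 0 is ranked first most often, item 2 is usually last
rankings = [[0, 1, 2], [0, 2, 1], [1, 0, 2], [0, 1, 2]]
v = fit_plackett_luce(rankings, 3)
```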


npj Computational Materials | 2018

Adaptive design of an X-ray magnetic circular dichroism spectroscopy experiment with Gaussian process modelling

Tetsuro Ueno; Hideitsu Hino; Ai Hashimoto; Yasuo Takeichi; Masahiro Sawada; Kanta Ono

Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency. We examined X-ray magnetic circular dichroism (XMCD) spectroscopy to assess the applicability of a machine learning technique to spectroscopy. An XMCD spectrum was predicted by Gaussian process modelling, learned from a limited number of observed data points of an experimental spectrum. Adaptively sampling the data point with the maximum variance of the predicted spectrum successfully reduced the total number of data points needed to evaluate magnetic moments while providing the required accuracy. The present method reduces the time and cost of XMCD spectroscopy and is potentially applicable to various spectroscopies.

Spectroscopy: Machine learning for efficient measurements

Machine learning methods can make spectroscopy more time- and cost-efficient. Spectroscopy is a powerful experimental technique for characterising the properties of materials, but measurement times can be long, resulting in high running costs. There are many different types of spectroscopy, using light at different wavelengths. A team of Japanese researchers led by Tetsuro Ueno and Kanta Ono now shows that machine learning methods can reduce the number of data points required to determine the magnetic moments in a material using X-ray magnetic circular dichroism spectroscopy. This method, which repeatedly adapts the experimental sampling based on model predictions, not only reduces the time and cost of this type of spectroscopy but should also be applicable to others.
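The adaptive-sampling loop can be sketched with a 1-D Gaussian process surrogate (a toy illustration, not the experimental pipeline; the RBF kernel, length scale, and synthetic "spectrum" peak are all assumptions): fit a GP to the points measured so far, then measure next where the posterior variance is largest.

```python
import numpy as np

def gp_posterior(Xs, ys, Xq, length=0.1, noise=1e-4):
    """GP regression with an RBF kernel: posterior mean and variance of
    the 'spectrum' at query points Xq, given samples (Xs, ys)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = k(Xq, Xs)
    mean = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.einsum('ij,ij->i', Kq, np.linalg.solve(K, Kq.T).T)
    return mean, var

f = lambda x: np.exp(-((x - 0.3) / 0.1) ** 2)   # stand-in for a spectral peak
grid = np.linspace(0.0, 1.0, 200)
Xs, ys = np.array([0.0, 1.0]), f(np.array([0.0, 1.0]))

err_initial = np.max(np.abs(gp_posterior(Xs, ys, grid)[0] - f(grid)))
for _ in range(15):                              # adaptive sampling loop
    _, var = gp_posterior(Xs, ys, grid)
    xn = grid[np.argmax(var)]                    # measure where most uncertain
    Xs, ys = np.append(Xs, xn), np.append(ys, f(xn))
err_final = np.max(np.abs(gp_posterior(Xs, ys, grid)[0] - f(grid)))
```

After a handful of adaptively chosen "measurements", the GP mean tracks the peak far better than the initial two-point fit, which is the efficiency gain the paper exploits.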

Collaboration


Dive into Hideitsu Hino's collaborations.

Top Co-Authors

Shotaro Akaho (National Institute of Advanced Industrial Science and Technology)

Jun Fujiki (National Institute of Advanced Industrial Science and Technology)