
Publication


Featured research published by Hidemitsu Ogawa.


Neural Computation | 2001

Subspace Information Criterion for Model Selection

Masashi Sugiyama; Hidemitsu Ogawa

The problem of model selection is of considerable importance for acquiring higher levels of generalization capability in supervised learning. In this article, we propose a new criterion for model selection, the subspace information criterion (SIC), which is a generalization of Mallows's C_L. It is assumed that the learning target function belongs to a specified functional Hilbert space and that the generalization error is defined as the Hilbert-space squared norm of the difference between the learning result function and the target function. SIC gives an unbiased estimate of the generalization error so defined. SIC assumes the availability of an unbiased estimate of the target function and the noise covariance matrix, which are generally unknown. A practical calculation method of SIC for least-mean-squares learning is provided under the assumption that the dimension of the Hilbert space is less than the number of training examples. Finally, computer simulations in two examples show that SIC works well even when the number of training examples is small.
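
For linear estimators the criterion is straightforward to evaluate. Below is a minimal numpy sketch of the construction described above, assuming a finite-dimensional model, known noise variance, and the ordinary least-squares solution as the required unbiased estimate (valid when the target is realizable by the model); the function name and toy data are ours, not the paper's notation.

```python
import numpy as np

def sic(y, X, X_u, sigma2):
    """Subspace information criterion for a linear estimator theta_hat = X @ y.

    X_u @ y must be an unbiased estimate of the target parameters.  SIC then
    estimates E||X y - theta||^2 without bias:
        ||X y - X_u y||^2 - sigma^2 tr((X - X_u)(X - X_u)^T)   # unbiased bias^2
        + sigma^2 tr(X X^T)                                    # variance term
    """
    D = X - X_u
    return (np.sum((D @ y) ** 2)
            - sigma2 * np.trace(D @ D.T)
            + sigma2 * np.trace(X @ X.T))

# Toy check: SIC of a ridge estimator vs. the true squared error.
rng = np.random.default_rng(0)
n, d, sigma2 = 50, 5, 0.1
Phi = rng.standard_normal((n, d))
theta = rng.standard_normal(d)
y = Phi @ theta + np.sqrt(sigma2) * rng.standard_normal(n)
X_u = np.linalg.pinv(Phi)                                    # OLS: unbiased here
X = np.linalg.solve(Phi.T @ Phi + 0.1 * np.eye(d), Phi.T)    # ridge, lambda = 0.1
print(sic(y, X, X_u, sigma2), np.sum((X @ y - theta) ** 2))
```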


IEEE Transactions on Signal Processing | 1996

Relative Karhunen-Loève transform

Yukihiko Yamashita; Hidemitsu Ogawa

The Karhunen-Loève transform (KLT) provides the best approximation of a stochastic signal under the condition that its rank is fixed. It has been successfully used for data compression in communication. However, since the KLT does not take noise into account, its ability to suppress noise is very poor. For optimum linear data compression in the presence of noise, we propose the concept of a relative Karhunen-Loève transform (RKLT). It minimizes the sum of the mean squared error between the original signal and its approximation and the mean squared error caused by noise, under the condition that its rank is fixed. We also provide another type of RKLT, which minimizes the same sum under the condition that its rank is not greater than a fixed integer. Since the former type of RKLT does not always exist, we provide a necessary and sufficient condition for its existence. We also provide general forms of both transforms. The advantage of RKLTs is illustrated through computer simulations.
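
As a rough illustration of the second (rank-at-most-r) variant: minimizing E||s - T(s + n)||^2 over rank-constrained T has a closed form via an SVD in coordinates whitened by the observation covariance. The numpy sketch below, assuming zero-mean, mutually uncorrelated signal and noise with known covariances, implements that construction; for Rn = 0 it reduces to the ordinary rank-r KLT. All names and data are hypothetical.

```python
import numpy as np

def rklt(Rs, Rn, r):
    """Rank-constrained MMSE transform: minimize E||s - T(s + n)||^2 over
    rank(T) <= r for zero-mean uncorrelated s, n with covariances Rs, Rn.
    With Ry = Rs + Rn = L L^T, the error equals ||T L - Rs L^{-T}||_F^2
    plus a constant, so the optimum is the rank-r SVD truncation of
    Rs L^{-T}, mapped back by L^{-1}."""
    L = np.linalg.cholesky(Rs + Rn)
    Linv = np.linalg.inv(L)
    U, s, Vt = np.linalg.svd(Rs @ Linv.T)
    return (U[:, :r] * s[:r]) @ Vt[:r] @ Linv

# Quick check on random covariances (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))
T = rklt(A @ A.T, 0.1 * B @ B.T, r=3)
print(np.linalg.matrix_rank(T))      # 3
```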


Applied Optics | 2002

Fast surface profiler by white-light interferometry by use of a new algorithm based on sampling theory

Akira Hirabayashi; Hidemitsu Ogawa; Katsuichi Kitagawa

We propose a fast surface-profiling algorithm based on white-light interferometry by use of sampling theory. We first provide a generalized sampling theorem that reconstructs the squared-envelope function of the white-light interferogram from sampled values of the interferogram and then propose the new algorithm based on the theorem. The algorithm extends the sampling interval to 1.425 µm when an optical filter with a center wavelength of 600 nm and a bandwidth of 60 nm is used. The sampling interval is 6-14 times wider than those used in conventional systems. The algorithm has been installed in a commercial system that achieved the world's fastest scanning speed of 80 µm/s. The height resolution of the system is of the order of 10 nm for a measurement range of greater than 100 µm.
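
The generalized sampling theorem itself is specific to the paper's interferometric signal model, but the quantity it recovers, the squared envelope of the fringe, can be illustrated with standard Hilbert-transform demodulation. The numpy/scipy sketch below, on a synthetic fringe with hypothetical parameters, locates the surface height from the envelope peak; it is not the paper's wide-interval reconstruction.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic white-light fringe along the scanning axis z (µm): a Gaussian
# coherence envelope centred at the surface height z0 modulating a carrier
# at the centre wavelength (all values hypothetical).
z = np.arange(0.0, 10.0, 0.05)
z0, lam, lc = 5.0, 0.6, 1.5          # height, centre wavelength, coherence length
fringe = np.exp(-((z - z0) / lc) ** 2) * np.cos(4 * np.pi * (z - z0) / lam)

# Squared envelope via the analytic signal; its peak estimates the height.
sq_env = np.abs(hilbert(fringe)) ** 2
print(f"estimated height: {z[np.argmax(sq_env)]:.2f} µm")   # ~5.00
```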


Neural Networks | 2002

Optimal design of regularization term and regularization parameter by subspace information criterion

Masashi Sugiyama; Hidemitsu Ogawa

The problem of designing the regularization term and regularization parameter for linear regression models is discussed. Previously, we derived an approximation to the generalization error called the subspace information criterion (SIC), which is an unbiased estimator of the generalization error with finite samples under certain conditions. In this paper, we apply SIC to regularization learning and use it for: (a) choosing the optimal regularization term and regularization parameter from the given candidates; (b) obtaining the closed form of the optimal regularization parameter for a fixed regularization term. The effectiveness of SIC is demonstrated through computer simulations with artificial and real data.
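
A minimal sketch of use (a), assuming known noise variance, a realizable toy model, and the ordinary least-squares solution as the unbiased reference estimate; the two candidate regularization terms (identity and first-difference) and all data are our hypothetical choices.

```python
import numpy as np

# Pick the regularization term and parameter pair minimizing SIC.
rng = np.random.default_rng(0)
n, d, sigma2 = 60, 8, 0.05
Phi = rng.standard_normal((n, d))
theta = rng.standard_normal(d)
y = Phi @ theta + np.sqrt(sigma2) * rng.standard_normal(n)
X_u = np.linalg.pinv(Phi)                         # unbiased reference estimate

diff = np.diff(np.eye(d), axis=0)                 # first-difference operator
terms = {"identity": np.eye(d), "smoothness": diff.T @ diff}

best = None
for name, R in terms.items():
    for lam in (1e-3, 1e-2, 1e-1, 1.0):
        X = np.linalg.solve(Phi.T @ Phi + lam * R, Phi.T)
        D = X - X_u
        sic = (np.sum((D @ y) ** 2)
               - sigma2 * np.trace(D @ D.T)
               + sigma2 * np.trace(X @ X.T))
        if best is None or sic < best[0]:
            best = (sic, name, lam)
print("SIC-optimal:", best[1], "lambda =", best[2])
```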


Neurocomputing | 1999

RKHS-based functional analysis for exact incremental learning

Sethu Vijayakumar; Hidemitsu Ogawa

We investigate the problem of incremental learning in artificial neural networks by viewing it as a sequential function approximation problem. A framework for discussing the generalization ability of a trained network in the original function space, using tools of functional analysis based on reproducing kernel Hilbert spaces (RKHS), is introduced. Using this framework, we devise a method of carrying out optimal incremental learning with respect to the entire set of training data by employing the results derived at the previous stage of learning and incorporating the newly available training data effectively. Most importantly, the incrementally learned function has the same (optimal) generalization ability as would have been achieved by batch learning on the entire set of training data, and the method is hence referred to as exact learning. This ensures that both the learning operator and the learned function can be computed using an online incremental scheme. Finally, we provide a simplified closed-form relationship between the learned functions before and after the incorporation of new data for various optimization criteria, opening avenues for work on the selection of optimal training sets. We also show that learning under this framework is inherently well suited to novel model selection strategies and to introducing bias and a priori knowledge in a systematic way. Moreover, it provides a useful hint for performing kernel-based approximations, of which regularization and SVM networks are special cases, in an online setting.
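
The paper's construction is stated in RKHS projection-learning terms; for the special case of a linear-in-parameters model with a squared-error criterion, the same "incremental equals batch" property is exhibited by classical recursive least squares. The numpy sketch below is that standard construction, offered only as an illustration of exact incremental learning, with hypothetical toy data.

```python
import numpy as np

# Recursive least squares: after each new pair (x, y), the incremental
# estimate equals the batch least-squares solution on all data seen so far.
rng = np.random.default_rng(1)
d = 4
theta_true = rng.standard_normal(d)

P = 1e6 * np.eye(d)                    # surrogate for (X^T X)^{-1}, weak prior
theta = np.zeros(d)
X_all, y_all = [], []
for t in range(100):
    x = rng.standard_normal(d)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    Px = P @ x                         # Sherman-Morrison rank-one update
    k = Px / (1.0 + x @ Px)
    theta = theta + k * (y - x @ theta)
    P = P - np.outer(k, Px)
    X_all.append(x)
    y_all.append(y)

batch = np.linalg.lstsq(np.array(X_all), np.array(y_all), rcond=None)[0]
print(np.allclose(theta, batch, atol=1e-4))   # True (up to the weak prior)
```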


Neural Computation | 2000

Incremental Active Learning for Optimal Generalization

Masashi Sugiyama; Hidemitsu Ogawa

The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models. The effectiveness of this method is shown through computer simulations. The other is the optimal sampling method in trigonometric polynomial models. This method precisely specifies the optimal sampling locations.
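
As a concrete, though simplified, stand-in for the multipoint search idea: for a linear model the variance of the least-squares estimate is governed by (Phi^T Phi)^{-1}, so sampling locations can be chosen greedily to improve that matrix. The numpy sketch below does a greedy D-optimal-style search over a candidate grid for a trigonometric polynomial model; the greedy rule and all parameters are our assumptions, not the paper's procedure.

```python
import numpy as np

def design_row(x, order):
    """Trigonometric polynomial features 1, cos(kx), sin(kx), k = 1..order."""
    feats = [np.ones_like(x)]
    for k in range(1, order + 1):
        feats += [np.cos(k * x), np.sin(k * x)]
    return np.stack(feats, axis=-1)

# Greedily add the candidate location whose feature vector r maximizes
# r^T A^{-1} r, i.e. grows det(Phi^T Phi) fastest.
order, n_pick = 2, 10
cand = np.linspace(0.0, 2.0 * np.pi, 200)
rows = design_row(cand, order)                 # (200, 2*order + 1)
A = 1e-6 * np.eye(2 * order + 1)               # jitter for invertibility
chosen = []
for _ in range(n_pick):
    Ainv = np.linalg.inv(A)
    gains = np.einsum('ij,jk,ik->i', rows, Ainv, rows)
    i = int(np.argmax(gains))
    chosen.append(cand[i])
    A += np.outer(rows[i], rows[i])
print(np.sort(np.asarray(chosen)))             # selected sampling locations
```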


Applied Optics | 2006

Single-shot surface profiling by local model fitting

Masashi Sugiyama; Hidemitsu Ogawa; Katsuichi Kitagawa; Kazuyoshi Suzuki

A new surface profiling algorithm called the local model fitting (LMF) method is proposed. LMF is a single-shot method that employs only a single image, so it is fast and robust against vibration. LMF does not require the conventional assumption that the target surface is smooth in a band-limited sense; instead, we assume that the target surface is locally constant. This enables us to recover sharp edges on the surface. LMF employs only local image data, so objects covered with heterogeneous materials can also be measured. The LMF algorithm is simple to implement and computationally efficient. Experimental results show that the proposed LMF method works very well.
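
A minimal sketch of the locally constant idea in one dimension, assuming a known spatial carrier frequency f0 whose local phase encodes the height: on a small window around each pixel, the fringe is fit by linear least squares with basis {1, cos, sin}, and the phase is read off with atan2. All parameters are hypothetical and the actual LMF formulation is two-dimensional.

```python
import numpy as np

# 1-D toy of local model fitting: a single fringe image with spatial
# carrier f0; the carrier phase encodes the surface height.
rng = np.random.default_rng(2)
n, f0, lam = 512, 0.1, 0.6                       # pixels, cycles/px, µm
x = np.arange(n)
height = np.where(x < n // 2, 0.0, 0.05)         # a sharp 0.05 µm step
phase = 4 * np.pi * height / lam
img = 1.0 + np.cos(2 * np.pi * f0 * x + phase) + 0.01 * rng.standard_normal(n)

half = 5                                         # window with locally constant phase
est = np.full(n, np.nan)
for i in range(half, n - half):
    w = np.arange(i - half, i + half + 1)
    # I = m + a cos(2 pi f0 w) + b sin(2 pi f0 w); a = cos(phi), b = -sin(phi)
    M = np.stack([np.ones(w.size),
                  np.cos(2 * np.pi * f0 * w),
                  np.sin(2 * np.pi * f0 * w)], axis=1)
    m, a, b = np.linalg.lstsq(M, img[w], rcond=None)[0]
    est[i] = np.arctan2(-b, a) * lam / (4 * np.pi)   # phase -> height
print(est[n // 4], est[3 * n // 4])              # ~0.00 and ~0.05
```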


Signal Processing | 2002

A unified method for optimizing linear image restoration filters

Masashi Sugiyama; Hidemitsu Ogawa

Image restoration from degraded images lies at the foundation of image processing, pattern recognition, and computer vision, so it has been extensively studied, and a large number of image restoration filters have been devised. A given filter may work excellently for a certain type of original image or degradation yet be unsuitable for others, so the selection of filters is exceedingly important in practice. Moreover, if a filter includes adjustable parameters such as the regularization parameter or a threshold, its restoration performance relies heavily on the choice of the parameter values. In this paper, we therefore discuss the problem of optimizing the filter type and parameter values. Our method is based on the subspace information criterion (SIC), which is an unbiased estimator of the expected squared error between the restored and original images. Since SIC is applicable to any linear filter, one can optimize the filter type and parameter values in a consistent fashion. Our emphasis in this article is on the practical concerns of SIC, such as noise variance estimation, computational issues, and comparison with existing methods. Specifically, we derive an analytic form of the optimal parameter values for the moving-average filter, which greatly reduces the computational cost. Experiments with the regularization filter show that SIC is comparable to existing methods in the small-degradation case and tends to outperform them in the severe-degradation case.
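
For the pure denoising case (identity degradation) the noisy image itself serves as the unbiased estimate of the original, and the SIC of a linear filter R takes a closed form in R and the noise variance. The numpy sketch below uses this to compare moving-average widths on a hypothetical 1-D signal; it is a grid search for illustration, not the paper's analytic form of the optimal parameter.

```python
import numpy as np

def moving_average_matrix(n, width):
    """Linear operator of a (truncated at the borders) moving-average filter."""
    R = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        R[i, lo:hi] = 1.0 / (hi - lo)
    return R

rng = np.random.default_rng(3)
n, sigma2 = 256, 0.04
f = np.sin(np.linspace(0, 4 * np.pi, n))            # hypothetical original
y = f + np.sqrt(sigma2) * rng.standard_normal(n)    # observed, identity degradation

for width in (1, 2, 4, 8, 16):
    R = moving_average_matrix(n, width)
    Rm = R - np.eye(n)
    sic = (np.sum((R @ y - y) ** 2)                 # SIC with X_u = I
           - sigma2 * np.trace(Rm @ Rm.T)
           + sigma2 * np.trace(R @ R.T))
    true_err = np.sum((R @ y - f) ** 2)
    print(width, round(sic, 2), round(true_err, 2))
```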


Neural Networks | 2001

Incremental projection learning for optimal generalization

Masashi Sugiyama; Hidemitsu Ogawa

In many practical situations in neural network learning, one wishes to further improve the generalization capability after the learning process has been completed. One of the common approaches is to add training data to the neural network. In view of how human beings learn, it seems natural to build posterior learning results upon prior results, which is generally referred to as incremental learning. Many incremental learning methods have been devised so far. However, they provide poor generalization capability compared with batch learning methods. In this paper, a method of incremental projection learning in the presence of noise is presented, which provides exactly the same learning result as that obtained by batch projection learning. The effectiveness of the presented method is demonstrated through computer simulations.


International Symposium on Optical Science and Technology | 2001

Fast surface profiler by white-light interferometry using a new algorithm, the SEST algorithm

Akira Hirabayashi; Hidemitsu Ogawa; Katsuichi Kitagawa

We devise a fast algorithm for surface profiling by white-light interferometry. It is named the SEST algorithm, after Square Envelope function estimation by Sampling Theory. Conventional methods for surface profiling by white-light interferometry are founded on digital signal processing techniques used as approximations of continuous signal processing. Hence, these methods require narrow sampling intervals to achieve good approximation accuracy. In this paper, we introduce a totally novel approach using sampling theory. That is, we provide a generalized sampling theorem that reconstructs the square envelope function of a white-light interference fringe from sampled values of the interference fringe. The sampling interval in the SEST algorithm is 6-14 times wider than those of conventional methods when an optical filter with a center wavelength of 600 nm and a bandwidth of 60 nm is used. The SEST algorithm has been installed in a commercial system that achieved the world's fastest scanning speed of 42.75 µm/s. The height resolution of the system is of the order of 10 nm for a measurement range of greater than 100 µm.

Collaboration


Dive into Hidemitsu Ogawa's collaborations.

Top Co-Authors

Yukihiko Yamashita
Tokyo Institute of Technology

Kazuyoshi Suzuki
Tokyo Institute of Technology

Taizo Iijima
Tokyo Institute of Technology

Shidong Li
San Francisco State University