Wataru Ohyama
Mie University
Publications
Featured research published by Wataru Ohyama.
international conference on pattern recognition | 2000
Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura; Shinji Tsuruoka; Kiyotsugu Sekioka
This study proposes a new automatic detection method for echocardiograms based on ternary thresholding. Two thresholds are determined by discriminant analysis of the gray-level histogram so that the input image is segmented into three regions: cardiac cavity, near epicardium, and the rest. The input echocardiogram is then binarized with the lower threshold (between black and gray) to detect the cardiac cavity. The binary image is contracted n times to remove small regions and to disconnect the cardiac cavity from other, false regions. Among the resulting regions, the one corresponding to the cardiac cavity is selected and dilated 2n times to create a mask that restricts the region of the second thresholding operation. The masked image of each frame is binarized with another threshold determined by discriminant analysis within the restricted area. Results of the evaluation test showed that the accuracy of the extracted contours compared favorably with that of manually traced contours.

The purpose of this edge detection and segmentation method for two-dimensional echocardiograms is to present procedures for detecting and segmenting an image from a two-dimensional echocardiogram and for generating a scan line that can be used to measure the distance between the two endocardial borders, which is useful for analyzing heart disease. The method applies image processing and computer graphics algorithms in three steps. First, we used image-improvement algorithms: noise suppression, histogram-based brightness adjustment, thresholding, and median filtering. Then, an edge detection algorithm with a Sobel compass gradient mask was applied to reveal the edge of the endocardial border. Finally, segmentation and several computer graphics algorithms were used to identify and generate the contour line of the endocardial border. The Pearson correlation coefficient was then used to evaluate the performance of this method against manual tracing. The average correlation obtained by this method is 0.9, which indicates a good result because 0.9 is very close to 1. However, some parts of the contour line have large errors. These unexpected results arise from incomplete endocardial borders, where the color values of some parts of the border are very close to those of the background or noise. This problem occurs in the first step and can be addressed by more careful data collection.
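A minimal sketch of the two-threshold discriminant segmentation stage described above, in Python; the exhaustive threshold search, the NumPy/SciPy helpers, and the value of n are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def ternary_otsu(img):
    """Find two thresholds (t1 < t2) maximizing the between-class
    variance of an 8-bit gray-level histogram split into three classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (levels * p).sum()
    best, t_best = -1.0, (0, 0)
    for t1 in range(1, 255):
        for t2 in range(t1 + 1, 256):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, 256)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (levels[lo:hi] * p[lo:hi]).sum() / w
                    var += w * (mu - mu_total) ** 2
            if var > best:
                best, t_best = var, (t1, t2)
    return t_best

def cavity_mask(img, n=3):
    """Binarize with the lower threshold, contract n times, keep the
    largest region (assumed to be the cavity), dilate 2n times."""
    t1, _ = ternary_otsu(img)
    cavity = img < t1                                          # black vs gray
    cavity = ndimage.binary_erosion(cavity, iterations=n)      # contract n times
    labels, _ = ndimage.label(cavity)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                               # ignore background
    cavity = labels == sizes.argmax()                          # largest region
    return ndimage.binary_dilation(cavity, iterations=2 * n)   # mask for 2nd pass
```

The returned mask restricts where the second discriminant threshold is computed, as the abstract describes.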
document engineering | 2003
Guowei Zu; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
In this paper, we describe a comparative study on techniques of feature transformation and classification to improve the accuracy of automatic text classification. The normalization to the relative word frequency, the principal component analysis (K-L transformation) and the power transformation were applied to the feature vectors, which were classified by the Euclidean distance, the linear discriminant function, the projection distance, the modified projection distance and the SVM.
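Of the classifiers compared, the projection distance is the least standard; a minimal sketch, assuming each class has been summarized by its mean and the leading eigenvectors of its covariance matrix (names and shapes are illustrative):

```python
import numpy as np

def projection_distance(x, mu, phi):
    """Squared distance from feature vector x to the subspace through
    class mean mu spanned by the top-k orthonormal eigenvectors phi
    (shape d x k). The class minimizing this distance is selected."""
    r = x - mu
    return r @ r - ((phi.T @ r) ** 2).sum()
```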
international conference on document analysis and recognition | 2001
Tetsushi Wakabayashi; Meng Shi; Wataru Ohyama; Fumitaka Kimura
This paper proposes a new corrective learning algorithm and evaluates the performance by a handwritten numeral recognition test. The algorithm generates a mirror image of a pattern that belongs to one class of a pair of confusing classes and utilizes it as a learning pattern of the other class. This paper also studies how to extract confusing patterns within a certain margin of a decision boundary to generate enough mirror images, and how to perform an effective mirror image compensation to increase the margin. Recognition accuracies of the minimum distance classifier and the projection distance method were improved from 93.17% to 98.38% and from 99.11% to 99.41% respectively in the recognition test for handwritten numeral database IPTP CD-ROM1.
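A sketch of the two ingredients the abstract names: selecting confusing patterns within a margin of the decision boundary, and generating a mirror image. Reflecting the pattern through the mean of the opposite class is an assumed formulation for the minimum-distance case, not necessarily the paper's exact construction.

```python
import numpy as np

def is_confusing(x, mean_a, mean_b, margin):
    """True if x lies within `margin` of the decision boundary of a
    minimum-distance classifier for classes A and B."""
    da = np.linalg.norm(x - mean_a)
    db = np.linalg.norm(x - mean_b)
    return abs(da - db) < margin

def mirror_image(x, mean_other):
    """Reflect pattern x through the mean of the confusing class to
    synthesize a learning pattern for that class (assumed formulation)."""
    return 2.0 * mean_other - x
```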
Lecture Notes in Computer Science | 2004
Xuexian Han; Guowei Zu; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
In this paper, we describe a comparative study on techniques of feature transformation and classification to improve the accuracy of automatic text classification. The normalization to the relative word frequency, the principal component analysis (K-L transformation) and the power transformation were applied to the feature vectors, which were classified by the Euclidean distance, the linear discriminant function, the projection distance, the modified projection distance and the SVM. In order to improve the classification accuracy, the multi-classifier combination by majority vote was employed.
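The combination step here is plain majority voting over the individual classifiers' label predictions; a minimal NumPy sketch:

```python
import numpy as np

def majority_vote(predictions):
    """Combine label predictions by majority vote.
    predictions: (n_classifiers, n_samples) array of integer labels;
    ties resolve to the smallest label index."""
    n_labels = predictions.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions,
                                minlength=n_labels)   # (n_labels, n_samples)
    return votes.argmax(axis=0)                       # combined labels
```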
international conference on frontiers in handwriting recognition | 2004
Yimei Ding; Wataru Ohyama; Fumitaka Kimura; Malayappan Shridhar
This paper describes three methods for local slant estimation: a simple iterative method, a high-speed iterative method, and an 8-directional chain code method. The experimental results show that the proposed methods estimate and correct local slant more accurately than average slant correction.
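A sketch of the chain-code idea: along near-vertical contour runs, the slant tangent is the net horizontal drift per unit of vertical travel. The direction numbering and the exact estimator are assumptions based on standard chain-code slant estimation, not taken from the paper.

```python
import numpy as np

def slant_degrees(codes):
    """Estimate slant from 8-directional chain codes of a contour.
    Assumed numbering: 1 = up-right (45 deg), 2 = up (90 deg),
    3 = up-left (135 deg); other codes are ignored as non-vertical."""
    codes = np.asarray(codes)
    n1 = np.count_nonzero(codes == 1)
    n2 = np.count_nonzero(codes == 2)
    n3 = np.count_nonzero(codes == 3)
    # horizontal drift (n1 - n3) over vertical travel (n1 + n2 + n3)
    return np.degrees(np.arctan2(n1 - n3, n1 + n2 + n3))
```

For local (rather than average) slant, the same estimate would be computed inside a sliding window over the word image.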
international conference on document analysis and recognition | 2011
Ryo Narita; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
In this paper, we propose a new method for three-dimensional rotation-free recognition of characters in scenes. The proposed method employs a Modified Quadratic Discriminant Function (MQDF) classifier trained with samples generated by a three-dimensional rotation process in a computer. We assume that, when recognizing individual characters, considering three-dimensional rotation can approximately handle the recognition of perspectively distorted characters. The results of evaluation experiments using printed alphanumeric characters, a data set of approximately 600 samples per class for 62 character classes, show a recognition rate of 99.34% for rotated characters and 99.59% for non-rotated characters. We have empirically confirmed that including rotated characters in the training data does not significantly degrade the recognition of non-rotated characters. Moreover, 437 characters extracted from 50 camera-captured scenes were correctly recognized, confirming the feasibility of real-world application of our method. Finally, we describe three-dimensional rotation angle estimation of characters for detecting the local normal of the surface on which the characters are printed, aiming at scene analysis by shape from characters.
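A compact sketch of an MQDF classifier of the kind employed here, assuming k principal axes per class and a per-class constant for the minor subspace estimated from the remaining eigenvalues; the hyperparameters and smoothing choice are illustrative, not the paper's settings.

```python
import numpy as np

class MQDF:
    """Modified Quadratic Discriminant Function: per class, keep the
    k leading eigen-pairs of the covariance and replace the minor
    eigenvalues with a constant delta."""

    def __init__(self, k=10):
        self.k = k

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            lam, phi = np.linalg.eigh(np.cov(Xc, rowvar=False))
            lam, phi = lam[::-1], phi[:, ::-1]        # descending eigenvalues
            lam = np.maximum(lam, 1e-12)              # numerical floor
            delta = lam[self.k:].mean()               # minor-subspace constant
            self.params_[c] = (mu, lam[:self.k], phi[:, :self.k], delta)
        return self

    def _g(self, x, mu, lam, phi, delta):
        r = x - mu
        proj = phi.T @ r                              # top-k projections
        major = (proj ** 2 / lam).sum()
        minor = (r @ r - (proj ** 2).sum()) / delta
        logdet = np.log(lam).sum() + (r.size - self.k) * np.log(delta)
        return major + minor + logdet                 # smaller is better

    def predict(self, X):
        return np.array([min(self.classes_,
                             key=lambda c: self._g(x, *self.params_[c]))
                         for x in X])
```

Training on synthetically rotated samples, as the abstract describes, would simply enlarge each class's training set X before fitting.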
Archive | 2009
K. Nakayama; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura; Shinji Tsuruoka; Kiyotsugu Sekioka
We propose a new speckle reduction algorithm for clinical echocardiograms. The proposed method employs wavelet shrinkage to reduce the noise on an ultrasonic signal. In wavelet shrinkage, the original ultrasonic signal is first decomposed into wavelet coefficients by multiresolution decomposition using an orthogonal wavelet. A threshold aimed at suppressing the noise component is estimated from the resulting wavelet coefficients, and coefficients corresponding to noise are eliminated by soft-thresholding. Noise reduction by wavelet shrinkage can remove specific frequency components. In this study, we employed the RI-Spline wavelet, which was proposed as a shift-invariant mother wavelet. We conducted experiments on clinical ultrasonic signals from ten clinical subjects to evaluate the noise reduction performance of the proposed method. The experimental results show that the algorithm provides superior speckle and noise reduction compared to an existing speckle reduction method.
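A minimal sketch of the wavelet-shrinkage step using PyWavelets; since the RI-Spline wavelet is not available there, an ordinary orthogonal wavelet ('sym8') stands in, and the universal threshold with a MAD noise estimate is an assumed choice, not the paper's estimator.

```python
import numpy as np
import pywt

def wavelet_shrinkage(signal, wavelet="sym8", level=4):
    """Denoise a 1-D ultrasonic signal by soft-thresholding its
    wavelet coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale from finest level
    t = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, t, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```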
document analysis systems | 2006
Mayo Murata; Lazaro S. P. Busagala; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
The digitization of printed documents involves generating text with an OCR system for applications including full-text retrieval and document organization. However, with present OCR technology, OCR-generated texts contain errors, and previous studies have shown that as OCR accuracy decreases, classification performance decreases as well. The reason is the use of absolute word frequency as the feature vector: representing OCR texts by absolute word frequency has limitations, such as dependence on text length and word recognition rate, and consequently lower classification performance due to higher within-class variance. We describe feature transformation techniques that do not have these limitations and present improved experimental results for all classifiers used.
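The length dependence the abstract points to disappears once counts are normalized to relative frequencies; a tiny illustration (the toy counts are invented):

```python
import numpy as np

def relative_frequency(counts):
    """Convert absolute word counts to relative frequencies so the
    feature vector no longer scales with OCR text length."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

doc = np.array([4, 2, 2])    # word counts from a short OCR text
doc_double = 2 * doc         # the same text at twice the length
assert np.allclose(relative_frequency(doc), relative_frequency(doc_double))
```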
international conference on frontiers in handwriting recognition | 2012
Takashi Ito; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
This paper proposes a new SVM-based technique for combining signature verification techniques that use off-line and on-line features. The off-line technique employs a gradient feature vector representing the shape of the signature image, and the on-line technique employs dynamic programming (DP) matching of the signatures' time-series data. The final decision (verification) is performed by an SVM on the outputs of the off-line and on-line techniques. In the evaluation test, the proposed technique achieved 92.96% verification accuracy, which is 1.4% higher than the better accuracy obtained by either individual technique. This result shows that combining multiple techniques with an SVM significantly improves signature verification accuracy.
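A sketch of the combination stage, assuming each signature has already been reduced to two scores, one from the off-line gradient-feature verifier and one from the on-line DP-matching verifier; the scalar-score assumption and all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_combiner(offline_scores, online_scores, labels):
    """Fit an SVM on the 2-D score space formed by the off-line and
    on-line verifiers' outputs. labels: 1 = genuine, 0 = forgery."""
    X = np.column_stack([offline_scores, online_scores])
    return SVC(kernel="rbf").fit(X, labels)

# verifier = train_combiner(s_off, s_on, y)
# verdict = verifier.predict(np.column_stack([s_off_test, s_on_test]))
```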
IEICE Transactions on Information and Systems | 2008
Lazaro S. P. Busagala; Wataru Ohyama; Tetsushi Wakabayashi; Fumitaka Kimura
Feature transformation in automatic text classification (ATC) can lead to better classification performance, and dimensionality reduction is also important in ATC. Hence, feature transformation and dimensionality reduction are performed together to obtain lower computational cost with improved classification performance. Conventionally, however, feature transformation and dimensionality reduction techniques have been considered in isolation, in which case classification performance can be lower than when they are integrated. We therefore propose an integrated feature analysis approach that improves classification performance at lower dimensionality. Moreover, we propose a multiple feature integration technique that further improves classification effectiveness.
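One way to read "integrated feature analysis" is as a single fitted pipeline rather than isolated preprocessing steps; a hedged scikit-learn sketch, where the power exponent, dimensionality, and classifier are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC

# Feature transformation and dimensionality reduction fitted together,
# then classification -- one pipeline instead of isolated steps.
classifier = make_pipeline(
    FunctionTransformer(np.sqrt),    # power transformation (p = 0.5 assumed)
    PCA(n_components=100),           # dimensionality reduction
    LinearSVC(),                     # final classifier (assumed choice)
)

# Multiple feature integration: concatenate feature sets before fitting.
# X = np.hstack([X_word_freq, X_ngram_freq]); classifier.fit(X, y)
```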