Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hiromitsu Nishimura is active.

Publication


Featured research published by Hiromitsu Nishimura.


international conference on document analysis and recognition | 2001

Off-line hand-written character recognition using integrated 1D HMMs based on feature extraction filters

Hiromitsu Nishimura; M. Tsutsumi

The purpose of our research is to improve the recognition rate of an off-line handwritten character recognition system using HMMs (hidden Markov models) so that the system can be used in practical applications. Because of the insufficient recognition rate of 1D HMM character recognition systems, and the huge number of learning samples required to construct 2D HMM character recognition systems, HMM-based character recognition systems have not yet achieved recognition performance sufficient for practical use. In this research, we propose a character recognition method that integrates four simply structured 1D HMMs, all of which are based on feature extraction using linear filters. The results of our evaluation experiment using the Hand-Printed Character Database (ETL6) showed that the first-rank recognition rate on the test samples was 98.5% and that the cumulative recognition rate of the top 3 candidates was 99.3%. Although our method is relatively easy to implement, it can perform even better than the 2D HMM method. These results show that the proposed method is very effective.
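
The integration idea can be illustrated with a minimal Python sketch: one 1D HMM per feature stream and per class, with the class decision made by summing the per-stream log-likelihoods. This assumes the hmmlearn library; the feature-extraction function, the state count, and the toy class set are placeholders, not the filters or settings used in the paper.

```python
# Minimal sketch: classify a character by summing the log-likelihoods of
# four independent 1D HMMs, one per feature-extraction filter.
# Assumes `hmmlearn` is installed; the feature streams are synthetic
# placeholders, not the linear filters used in the paper.
import numpy as np
from hmmlearn.hmm import GaussianHMM

N_FILTERS = 4          # number of 1D feature streams per character image
N_STATES = 6           # HMM states per model (assumption)
CLASSES = ["A", "B"]   # toy class set

rng = np.random.default_rng(0)

def extract_streams(image_like):
    """Placeholder for the four feature-extraction filters:
    returns a list of (T, 1) observation sequences."""
    return [rng.normal(size=(32, 1)) for _ in range(N_FILTERS)]

# Train one HMM per (class, filter) pair on that class's training streams.
models = {}
for c in CLASSES:
    train_streams = [extract_streams(None) for _ in range(20)]  # 20 samples/class
    for f in range(N_FILTERS):
        X = np.vstack([s[f] for s in train_streams])
        lengths = [len(s[f]) for s in train_streams]
        hmm = GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=20)
        hmm.fit(X, lengths)
        models[(c, f)] = hmm

def classify(image_like):
    """Integrated decision: sum per-filter log-likelihoods for each class."""
    streams = extract_streams(image_like)
    scores = {c: sum(models[(c, f)].score(streams[f]) for f in range(N_FILTERS))
              for c in CLASSES}
    return max(scores, key=scores.get)

print(classify(None))
```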


international conference on human interface and management of information | 2016

Basic Investigation for Improvement of Sign Language Recognition Using Classification Scheme

Hirotoshi Shibata; Hiromitsu Nishimura; Hiroshi Tanaka

Sign language is a commonly used communication method for hearing-impaired or speech-impaired people. However, it is quite difficult to learn sign language. If automatic translation of sign language can be realized, it will be very meaningful and convenient not only for impaired people but also for physically unimpaired people. The difficulty of automatic translation is that there are many variations in sign language motions, which degrades recognition performance. This paper presents a recognition method for maintaining recognition performance across many sign language motions. A classification scheme using a decision tree is introduced, which decreases the number of words to be recognized at a time by dividing them into groups. The hand used, the characteristics of the hand motion, and the relative position between the hands and the face are considered in creating the decision tree. Experiments confirmed that the recognition success rate increased from 41% and 59% to 59% and 82%, respectively, for a basic set of 17 sign language words performed by four sign language operators.
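
A minimal sketch of the grouping idea follows. The grouping keys (hand use and hand/face relation) follow the paper's description, but the concrete word lists, predicate functions, and thresholds are illustrative assumptions only.

```python
# Minimal sketch of the classification scheme: route a sign to a smaller
# candidate group before running the recognizer.
CANDIDATE_GROUPS = {
    ("one_hand", "near_face"):  ["drink", "eat", "think"],
    ("one_hand", "away_face"):  ["go", "come", "sit"],
    ("two_hands", "near_face"): ["glasses", "telephone"],
    ("two_hands", "away_face"): ["book", "house", "car"],
}

def hands_used(motion):
    """Placeholder: decide whether one or two hands move in the clip."""
    return "two_hands" if motion.get("left_moves") and motion.get("right_moves") else "one_hand"

def hand_face_relation(motion):
    """Placeholder: relative position of the hands to the face."""
    return "near_face" if motion.get("min_hand_face_distance", 1.0) < 0.2 else "away_face"

def candidate_words(motion):
    """Walk the (tiny) decision tree and return the reduced word set."""
    key = (hands_used(motion), hand_face_relation(motion))
    return CANDIDATE_GROUPS.get(key, [])

# Example: a one-handed motion performed close to the face.
print(candidate_words({"left_moves": False, "right_moves": True,
                       "min_hand_face_distance": 0.1}))
```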


international conference on human interface and management of information | 2013

Basic investigation into hand shape recognition using colored gloves taking account of the peripheral environment

Takahiro Sugaya; Takayuki Suzuki; Hiromitsu Nishimura; Hiroshi Tanaka

Although infrared cameras are sometimes used for posture and hand shape recognition, they are not widely used. In contrast, visible light cameras are widely used as web cameras and are built into mobile phones and smartphones. We have used colored gloves in order to allow hand shapes to be recognized by visible light cameras, which expands both the types of background that can be used and the application areas. Hand shape recognition using colored gloves can express many patterns and can be used for many applications, such as communication and input interfaces. The recognition performance depends on the color information of the colored gloves, which is affected by the environment, especially the illumination conditions, i.e., bright or dim lighting. Hue values are used to detect color in this investigation. The relative finger positions and finger lengths are used to confirm the validity of color detection. We propose a method for rejecting image frames that include a color detection error, which would otherwise give rise to a hand shape recognition error. Experiments were carried out under three different illumination conditions. The effectiveness of the proposed method has been verified by comparing the recognition success ratio of the conventional method with that of the proposed method.
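
The hue-based detection and frame-rejection steps can be sketched as follows, assuming OpenCV and NumPy. The hue ranges, the minimum pixel count, and the consistency check are illustrative values, not the paper's calibrated thresholds or its actual validity test.

```python
# Minimal sketch of hue-based glove-color detection with frame rejection.
import cv2
import numpy as np

# Illustrative hue ranges (OpenCV hue is 0-179) for two glove regions.
HUE_RANGES = {
    "thumb":  (100, 120),   # e.g. a blue region
    "index":  (40, 70),     # e.g. a green region
}
MIN_PIXELS = 200            # reject detections smaller than this (assumption)

def detect_regions(frame_bgr):
    """Return a binary mask per colored region using hue thresholds."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    masks = {}
    for name, (lo, hi) in HUE_RANGES.items():
        lower = np.array([lo, 60, 60], dtype=np.uint8)
        upper = np.array([hi, 255, 255], dtype=np.uint8)
        masks[name] = cv2.inRange(hsv, lower, upper)
    return masks

def frame_is_valid(masks):
    """Reject frames whose color detection is implausible
    (too few pixels, or regions implausibly far apart)."""
    centroids = {}
    for name, mask in masks.items():
        if cv2.countNonZero(mask) < MIN_PIXELS:
            return False
        ys, xs = np.nonzero(mask)
        centroids[name] = (xs.mean(), ys.mean())
    # Example consistency check on relative region positions.
    return abs(centroids["thumb"][0] - centroids["index"][0]) < 300

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
print(frame_is_valid(detect_regions(frame)))
```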


international conference on document analysis and recognition | 2003

Offline character recognition using online character writing information

Hiromitsu Nishimura; T. Timikawa

Recognition of variously deformed character patterns is a salient subject for offline hand-printed character recognition. Sufficient recognition performance for practical use has not been achieved despite reports of many recognition techniques. Our research examines effective recognition techniques for deformed characters, extending conventional recognition techniques with online character writing information that includes writing pressure data. A recognition system using simple pattern matching and an HMM was built for evaluation experiments using common hand-printed English character patterns from the ETL6 database to determine the effectiveness of the proposed extended recognition method. Character recognition performance increased for both extended recognition methods when online writing information was used.


international conference on human interface and management of information | 2014

Enhancement of Accuracy of Hand Shape Recognition Using Color Calibration by Clustering Scheme and Majority Voting Method

Takahiro Sugaya; Hiromitsu Nishimura; Hiroshi Tanaka

This paper presents methods for enhancing the recognition accuracy of hand shapes in a scheme proposed by the authors that is easy to memorize and can represent a large amount of information. To ensure suitability for practical use, the recognition performance must be maintained even when the illumination environment changes. First, a color calibration process using a k-means clustering scheme is introduced as a way of ensuring high performance in color detection. In the proposed method, the thresholds for hue values are decided before the recognition process as a color calibration step. The second method of enhancing accuracy involves making a majority decision. Many image frames are obtained from one hand shape before the transition to the next shape. The frames in this hand shape formation time span are used for shape recognition by majority voting based on the recognition results from each frame. Experiments carried out under different illumination conditions have verified that the proposed techniques can raise the recognition performance.
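
The two enhancement ideas can be sketched briefly: hue thresholds derived by clustering calibration pixels with k-means, and a majority vote over per-frame recognition results. This is a minimal sketch assuming scikit-learn; the calibration data, the +/-10 hue margin, and the frame labels are synthetic placeholders.

```python
# (1) Color calibration by k-means on hue samples, (2) majority voting.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# (1) Cluster hue samples from a calibration image (placeholder data).
rng = np.random.default_rng(0)
hue_samples = np.concatenate([rng.normal(30, 3, 200),    # e.g. orange glove region
                              rng.normal(110, 3, 200)])  # e.g. blue glove region
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(hue_samples.reshape(-1, 1))
centers = sorted(c[0] for c in kmeans.cluster_centers_)
# Illustrative rule: threshold each color at its cluster center +/- a margin.
thresholds = [(c - 10, c + 10) for c in centers]
print("hue thresholds:", thresholds)

# (2) Majority voting over the frames within one hand-shape time span.
def vote(frame_results):
    """Return the most frequent per-frame label within one shape's span."""
    return Counter(frame_results).most_common(1)[0][0]

print(vote(["V", "V", "W", "V", "V"]))   # -> "V"
```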


international conference on universal access in human-computer interaction | 2017

Investigation of Feature Elements and Performance Improvement for Sign Language Recognition by Hidden Markov Model

Tatsunori Ozawa; Hirotoshi Shibata; Hiromitsu Nishimura; Hiroshi Tanaka

Sign language is commonly used as a means of communication for hearing-impaired or speech-impaired people. However, there are many difficulties in learning sign language. If automatic translation of sign language can be realized, it would be extremely valuable and helpful not just to those who are physically impaired but to unimpaired people as well. The difficulty of automatic translation is that there are many kinds of specific hand motions and shapes, which makes it difficult to discriminate each motion and has a negative impact on accurate recognition. This paper presents a recognition method that is able to maintain accurate recognition of different signs that encompass a multitude of hand motions and shapes. The main feature of our approach is the use of colored gloves to detect hand motions and shapes. For our investigation, a recognition scheme using an HMM (Hidden Markov Model) has been introduced to enhance recognition performance. In this scheme, performance depends on the feature elements extracted from each sign language motion. Feature elements of sign language motions and their unification are investigated, and the recognition performance obtained with these feature elements is clarified and compared for each case. Although the recognition success rate for each individual feature element is low, from 21.7% to 42.7%, recognition success with the combined elements increased from 55.2% to 61.9% for 25 different sign language motions. In addition, removal of candidates by preprocessing, using a threshold obtained from DP matching, was also examined to enhance performance. It is also confirmed through experiments that this increased the recognition success rate by a few percentage points.
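
The candidate-removal preprocessing can be illustrated with a short sketch: a DP (dynamic time warping) distance between the input feature sequence and a per-word reference, with words above a threshold dropped before the HMM stage. The reference sequences, features, and threshold value here are illustrative assumptions.

```python
# Minimal sketch of DP-matching-based candidate removal before HMM scoring.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW on 1-D feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

REFERENCES = {            # one reference sequence per word (placeholder data)
    "thanks":  np.linspace(0.0, 1.0, 20),
    "morning": np.sin(np.linspace(0.0, 3.0, 20)),
}
THRESHOLD = 5.0           # pruning threshold (assumption)

def prune_candidates(observed):
    """Keep only words whose DTW distance to the input is below THRESHOLD."""
    return [w for w, ref in REFERENCES.items()
            if dtw_distance(observed, ref) < THRESHOLD]

print(prune_candidates(np.linspace(0.1, 0.9, 25)))
```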


international conference on human-computer interaction | 2016

Study of Posture Estimation System Using Infrared Camera

Airi Yoshino; Hiromitsu Nishimura

In Japan, the number of patients suffering from bedsores has been increasing with the progression of population aging. In this study, we investigated techniques for analyzing posture or motion for the prevention of bedsores. In particular, we propose a method for analyzing the posture of a person lying in bed using depth maps and color images obtained from a Kinect sensor. First, the color images are used to perform color identification of the hair, from which the position of the head is estimated: the color information of each pixel is acquired, and the region to which most hair-colored pixels belong is found. Based on this analysis, it was possible to determine the position of the head. Second, the depth maps are divided into four regions: the head region, chest region, abdominal region, and foot region. Based on these analyses, it was possible to determine which of three positions the person was lying in: prone, supine, or sideways.
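
The region division can be sketched in a few lines, assuming NumPy and a Kinect-like depth map. The split proportions along the body axis and the per-band statistic are illustrative assumptions, not the paper's calibrated values.

```python
# Minimal sketch: divide a depth map into head, chest, abdominal and foot
# bands along the body axis and summarize each band.
import numpy as np

def split_body_regions(depth_map):
    """Split a (rows, cols) depth map into four bands along the long axis."""
    h = depth_map.shape[0]
    bounds = [0, int(0.2 * h), int(0.5 * h), int(0.75 * h), h]
    names = ["head", "chest", "abdomen", "feet"]
    return {n: depth_map[bounds[i]:bounds[i + 1]]
            for i, n in enumerate(names)}

depth = np.random.default_rng(0).uniform(800, 1500, size=(424, 512))  # mm, Kinect-like
for name, band in split_body_regions(depth).items():
    print(name, round(float(band.mean()), 1))
```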


international conference on human-computer interaction | 2018

Investigation of Sign Language Recognition Performance by Integration of Multiple Feature Elements and Classifiers

Tatsunori Ozawa; Yuna Okayasu; Maitai Dahlan; Hiromitsu Nishimura; Hiroshi Tanaka

Sign languages are used by healthy individuals when communicating with those who are hearing- or speech-impaired, as well as by those with hearing or speech impediments themselves. It is quite difficult to acquire sign language skills, since there is a vast number of sign language words and some signing motions are very complex. Several attempts at machine translation have been investigated for a limited number of sign language motions by using KINECT and a data glove, which is equipped with a strain gauge to monitor the angles at which fingers are bent, to detect hand motions and hand shapes.


complex, intelligent and software intensive systems | 2018

Feasibility Study on Deep Learning Scheme for Sign Language Motion Recognition

Kazuki Sakamoto; Eiji Ota; Tatsunori Ozawa; Hiromitsu Nishimura; Hiroshi Tanaka

This paper presents the results of a feasibility study of a deep learning scheme for sign language motion recognition. The motions used in sign language were captured using specially designed colored gloves and an optical camera. Deep learning and conventional classification schemes were used for motion recognition, and their results are compared. In the deep learning process, each frame of motion data is passed directly to AlexNet for feature extraction. Although the structure of the neural network and optional parameters for deep learning have not been optimized at this stage, it was verified that the recognition accuracy ranged from 59.6% to 72.3% for twenty-five motions. Though this performance is inferior to that of the conventional schemes, these results indicate the feasibility of using a deep learning scheme for sign language motion recognition.
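
Per-frame feature extraction with AlexNet can be sketched as follows, assuming a recent PyTorch/torchvision. weights=None keeps the sketch self-contained and offline, whereas a real pipeline would load trained weights; the downstream use of the features is not shown and the input normalization is assumed to be done beforehand.

```python
# Minimal sketch: extract AlexNet convolutional features for one video frame.
import torch
from torchvision.models import alexnet

model = alexnet(weights=None)      # replace with pretrained/trained weights in practice
model.eval()

def frame_features(frame_tensor):
    """Return flattened convolutional features for one RGB frame
    (shape (3, 224, 224), values already normalized)."""
    with torch.no_grad():
        x = model.features(frame_tensor.unsqueeze(0))   # conv feature maps
        x = model.avgpool(x)
        return torch.flatten(x, 1)                      # (1, 9216)

frame = torch.rand(3, 224, 224)    # stand-in for one captured video frame
print(frame_features(frame).shape)
```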


pacific rim conference on communications, computers and signal processing | 2017

Performance enhancement by combining visual clues to identify sign language motions

Yuna Okayasu; Tatsunori Ozawa; Maitai Dahlan; Hiromitsu Nishimura; Hiroshi Tanaka

This paper presents a sign language recognition method that uses gloves with colored regions and an optical camera. Hand and finger motions can be identified by the movement of the colored regions. The authors propose using six weak cues from each sign language motion, as determined by an HMM (Hidden Markov Model). Decoding and recognition are achieved by detecting characteristic combinations of cues. It was experimentally verified that a recognition rate as high as 62.3% can be achieved by looking for six cues per word over a vocabulary of 25 sign language words.
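
Combining several weak per-cue scores into one decision can be sketched briefly. Each cue is assumed to yield a score (e.g. an HMM log-likelihood) for every candidate word; the rank-voting combination rule and the cue names below are illustrative, not necessarily the exact rule used in the paper.

```python
# Minimal sketch: combine weak per-cue scores by rank-based voting.
from collections import defaultdict

# Per-cue scores for three candidate words (placeholder numbers).
cue_scores = {
    "right_hand_trajectory": {"thanks": -120.0, "morning": -150.0, "sorry": -160.0},
    "left_hand_trajectory":  {"thanks": -140.0, "morning": -130.0, "sorry": -155.0},
    "hand_shape":            {"thanks": -90.0,  "morning": -110.0, "sorry": -95.0},
}

def combine(cues):
    """Give each word points according to its rank under every cue, then
    return the word with the most points overall."""
    points = defaultdict(int)
    for scores in cues.values():
        ranked = sorted(scores, key=scores.get, reverse=True)   # best first
        for rank, word in enumerate(ranked):
            points[word] += len(ranked) - rank
    return max(points, key=points.get)

print(combine(cue_scores))
```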

Collaboration


Dive into Hiromitsu Nishimura's collaborations.

Top Co-Authors

Hiroshi Tanaka (Kanagawa Institute of Technology)
Kazuhiro Notomi (Kanagawa Institute of Technology)
Hiroshi Shimeno (Kanagawa Institute of Technology)
Yuki Hoshino (Kanagawa Institute of Technology)
Tatsunori Ozawa (Kanagawa Institute of Technology)
Hirotoshi Shibata (Kanagawa Institute of Technology)
Takahiro Sugaya (Kanagawa Institute of Technology)
Yuna Okayasu (Kanagawa Institute of Technology)
Maitai Dahlan (Chulalongkorn University)
Airi Yoshino (Kanagawa Institute of Technology)