Katsufumi Inoue
Osaka Prefecture University
Publication
Featured research published by Katsufumi Inoue.
international conference on computer vision | 2009
Katsufumi Inoue; Koichi Kise
Nearest-neighbor search over feature vectors representing local features is often employed for specific object recognition. Such a method must store many feature vectors and match them by distance calculation. The number of feature vectors is generally so large that a huge amount of memory is needed for their storage. One way to solve this problem is to skip the distance calculation entirely, since no feature vectors need to be stored if no distances are computed. In this paper, we propose a method of object recognition without distance calculation. The characteristic point of the proposed method is the use of a Bloomier filter, which is far more memory-efficient than a hash table, for the storage and matching of feature vectors. Through experiments on planar and 3D specific object recognition, the proposed method is evaluated against a method based on a hash table.
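The key idea, matching without distance calculation, can be illustrated with a toy sketch: quantize each local feature to a coarse key and look it up in a key-to-value table, voting over all query features. This uses a plain Python dict rather than the paper's Bloomier filter (which behaves like the same key-to-object map but is far more memory-efficient), and the quantization step is an illustrative assumption.

```python
def quantize(vec, step=0.25):
    """Map a real-valued feature vector to a coarse integer key.
    The step size is an illustrative assumption."""
    return tuple(int(round(v / step)) for v in vec)

def build_table(db):
    """db: list of (object_id, feature_vector) pairs.
    Only keys and object IDs are stored -- no raw vectors."""
    table = {}
    for obj_id, vec in db:
        table[quantize(vec)] = obj_id
    return table

def recognize(table, query_vecs):
    """Vote over local features; each table hit adds one vote,
    misses are simply skipped (no distance is ever computed)."""
    votes = {}
    for vec in query_vecs:
        obj = table.get(quantize(vec))
        if obj is not None:
            votes[obj] = votes.get(obj, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

A query feature that falls into the same quantization cell as a stored feature retrieves its object ID directly, so recognition cost is independent of how the features are distributed in feature space.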
Procedia Computer Science | 2015
Katsufumi Inoue; Takami Shiraishi; Michifumi Yoshioka; Hidekazu Yanagimoto
Hand sign recognition is one of the most challenging issues in computer vision and human-computer interaction, and many researchers have tackled it. In this research, we focus on JFSL (Japanese Finger-spelled Sign Language), one type of hand sign. The tasks for achieving high JFSL recognition performance, as for other hand signs, are how to extract the hand region precisely and how to recognize hand signs accurately. To deal with the former task, we propose an automatic hand region extraction method using a depth sensor. The characteristic points of our proposed method are that it utilizes the Time-Series Curve, a contour feature, and that it extracts the hand region accurately without requiring the user to wear a landmark object such as a colored wristband. To tackle the latter task, we focus on a deep neural network based recognition method, since such methods are reported to achieve high performance on various recognition tasks. We therefore investigate JFSL recognition performance with a deep neural network approach compared to a conventional image recognition method (HOG+SVM). From experimental results with 8 subjects, we confirmed that our proposed method extracts the hand region accurately regardless of subject and JFSL sign. In addition, with the deep neural network based recognition method, we achieved an average recognition rate of over 88%.
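A common form of the Time-Series Curve contour feature mentioned above is the centroid-to-boundary distance traced along the contour, where fingers show up as peaks. The following is a minimal sketch of that representation, assuming the contour is given as an ordered list of (x, y) points; it is not the paper's exact implementation.

```python
import math

def time_series_curve(contour):
    """Centroid-to-boundary distance along an ordered contour,
    normalized by the maximum distance so the curve lies in (0, 1]."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    m = max(dists)
    return [d / m for d in dists]
```

On a hand contour, each fingertip produces a local maximum of this curve, which is what makes it useful for locating the hand region and separating it from the wrist and forearm.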
2017 IEEE Winter Applications of Computer Vision Workshops (WACVW) | 2017
Jumpei Yamamoto; Katsufumi Inoue; Michifumi Yoshioka
Human behavior analysis based on surveillance cameras is a hot topic in security and marketing as well as in computer vision and pattern recognition, and such analysis is useful for commercial facilities such as convenience stores and book stores. In general, since a surveillance camera is placed on the ceiling near a store wall to monitor customer behavior, most of this research uses human models adapted to a frontal view of the person, because the human shape has high discriminative power for human detection, pose estimation, etc. However, this approach suffers from the problem that customers are often occluded by others in the store. To solve this occlusion problem, we propose a new human behavior analysis method using a top-view depth camera. As a first step in investigating the effectiveness of the analysis, we assume a book store setting. Our proposed method is composed of two behavior estimators: the first is based on hand height obtained from depth information, and the second is based on an SVM with depth- and PSA (Pixel State Analysis)-based features. The characteristic point of our proposed method is that these estimators are applied in cascade. From experimental results with 10 behaviors performed by 3 subjects, although this research is still exploratory, we confirmed that our proposed method achieves an average estimation accuracy of 89.5%.
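The cascade described above can be sketched as a two-stage decision: a cheap rule on hand height from the depth map handles the behaviors it can separate, and everything else falls through to the learned classifier. The threshold value, the behavior labels, and the classifier interface below are all illustrative assumptions, not the paper's actual configuration.

```python
def cascade_estimate(hand_height, features, clf_predict, height_thresh=1.4):
    """Two-stage behavior estimator.
    Stage 1: rule on hand height in metres (from the top-view depth map).
    Stage 2: learned classifier (e.g. an SVM) on depth/PSA features.
    The threshold and the 'reach_high' label are hypothetical."""
    if hand_height >= height_thresh:
        return "reach_high"
    return clf_predict(features)
```

Cascading this way keeps the expensive classifier off the easy cases while letting it decide the ambiguous ones.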
international joint conference on artificial intelligence | 2018
Tsukasa Okumura; Shuichi Urabe; Katsufumi Inoue; Michifumi Yoshioka
Recently, wearable cameras have become easy to obtain, and recording egocentric videos with them is straightforward. Daily activity recognition from egocentric videos is therefore one of the hot topics in computer vision. In this research, we propose a new cooking activity recognition method for egocentric videos. Our proposed method has the following characteristic points: 1) hand regions are detected with bounding boxes using SSD; 2) hand keypoints (articular points) are estimated using OpenPose, and hand features are extracted from the keypoint positions; and 3) a fully connected multi-layer neural network recognizes cooking activities from the extracted features. From experimental results on our benchmark of eight cooking activities, we confirmed that our proposed method recognizes cooking activities with 58.9% average accuracy.
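Step 2 above turns raw keypoint coordinates into a feature vector. One standard way to do this, sketched below under the assumption that the first keypoint is the wrist, is to translate the joints so the wrist is the origin and scale by the largest wrist-to-joint distance, making the descriptor invariant to where and how large the hand appears in the frame. Whether the paper normalizes exactly this way is not stated, so treat this as an assumption.

```python
import math

def keypoint_features(keypoints):
    """keypoints: ordered list of (x, y) hand joints (e.g. 21 from
    OpenPose's hand model, wrist first -- assumed here).
    Returns a flat, position- and scale-invariant feature vector."""
    wx, wy = keypoints[0]
    rel = [(x - wx, y - wy) for x, y in keypoints]
    # Scale by the farthest joint from the wrist (avoid division by zero).
    scale = max(math.hypot(x, y) for x, y in rel[1:]) or 1.0
    feats = []
    for x, y in rel:
        feats.extend([x / scale, y / scale])
    return feats
```

A vector like this can be fed directly to a fully connected network, since its length is fixed by the number of keypoints.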
international joint conference on artificial intelligence | 2018
Shuichi Urabe; Katsufumi Inoue; Michifumi Yoshioka
Recently, activity recognition in egocentric videos recorded with wearable cameras has become one of the hot topics in computer vision. We focus mainly on the problem of cooking activity recognition in egocentric videos; accurate recognition would enable cooking assistance services. In this paper, as a first step toward such services, we build a new dataset consisting of 8 cooking activities that involve different hand motions and utensils. We then propose a new cooking activity recognition method for egocentric videos. Our proposed method has three characteristic points: 1) hand regions are detected in egocentric videos with cluttered backgrounds; 2) the region around the hand is extracted based on the detected hand region to reduce the effect of background clutter; and 3) cooking activities are recognized by combining an appearance-based approach using 2D convolutional neural networks (2DCNN) and a motion-based approach using 3D convolutional neural networks (3DCNN). From experimental results, we confirmed that our proposed method recognizes cooking activities with 97.5% accuracy on our new dataset, and that combining the appearance-based 2DCNN and the motion-based 3DCNN improves recognition performance. We also investigate the environmental robustness of our proposed method.
soft computing | 2017
Koji Kita; Michifumi Yoshioka; Katsufumi Inoue
The bilateral filter, a common noise removal method, is an edge-preserving filter known to emphasize edges while removing noise. In this paper, we describe an image super-resolution method that exploits this property. Experimental results do not differ greatly from those of conventional methods in either objective evaluation (PSNR) or subjective evaluation. However, the proposed method is robust to the degree of degradation (enlargement ratio), and according to a new objective evaluation focusing on edges, we confirmed a sharpening effect exceeding that of conventional methods.
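The edge-preserving behavior the method relies on comes from the bilateral filter's weights, which combine spatial closeness with intensity similarity, so averaging never crosses a strong edge. Below is a minimal 1D sketch of the filter itself (not the super-resolution pipeline); the sigma values and radius are illustrative.

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=0.2, radius=2):
    """1D bilateral filter: each neighbor's weight is the product of a
    spatial Gaussian (distance along the signal) and a range Gaussian
    (difference in intensity), so smoothing stops at strong edges."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Applied to a step edge, the filter smooths each side but leaves the step nearly intact, which is the property a super-resolution method can exploit to keep enlarged edges sharp.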
Archive | 2009
Katsufumi Inoue; Hiroshi Miyake; Kouichi Kise
Korea Multimedia Society International Conference | 2009
Katsufumi Inoue; Koichi Kise
Archive | 2010
Katsufumi Inoue; Koichi Kise
Archive | 2010
Katsufumi Inoue; Kouichi Kise