
Publication


Featured research published by Motomasa Tomida.


Intelligent Service Robotics | 2010

Quick and accurate estimation of human 3D hand posture

Kiyoshi Hoshino; Motomasa Tomida

We propose a three-dimensional hand posture estimation system that retrieves the hand posture image most similar to the input data from a non-multilayer database. At the first stage, the system performs coarse screening using proportional information on the hand images, which roughly corresponds to forearm rotation or bending of the thumb and four fingers; at the second stage, it performs a detailed similarity search over the selected candidates. No separate processes are used to estimate the joint angles corresponding to forearm rotation or to internal and external rotation, bending, and stretching of the wrist. By estimating sequential images of the finger shape with this method, we achieved a joint angle estimation error within two or three degrees and a processing speed of approximately 80 fps or more, using only one notebook PC and a high-speed camera, even while the wrist was rotating freely. Since the image information and the joint angle information, including the wrist joint, are paired in the database, the system can drive a robot hand to imitate the motions of a person's fingers and wrist with no time delay by outputting the estimation results to the robot hand.
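
As a minimal, hypothetical sketch of this two-stage retrieval (not the authors' implementation; the record layout, tolerance value, and distance measure are assumptions made for illustration):

import math

def estimate_posture(database, query_proportions, query_features, tol=0.05):
    # Stage 1: coarse screening by the proportional information of the hand
    # image, which roughly reflects forearm rotation and finger bending.
    candidates = [rec for rec in database
                  if all(abs(p - q) < tol
                         for p, q in zip(rec["proportions"], query_proportions))]
    if not candidates:                 # fall back to the whole database
        candidates = database

    # Stage 2: detailed similarity search restricted to the candidates.
    best = min(candidates,
               key=lambda rec: math.dist(rec["features"], query_features))

    # Image data and joint angles (including the wrist) are paired in each
    # record, so the matched joint angles can be sent straight to a robot hand.
    return best["joint_angles"]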


International Conference on Bioinformatics and Biomedical Engineering | 2016

Hand motion capture for medical usage

Kiyoshi Hoshino; Sota Sugimura; Motomasa Tomida; Naoki Igo; Isao Kawano; Masahiko Sumitani

The authors propose, not the non-contact device that our research group has been developing, but a compact wearable device that allows estimation of the hand pose (hand motion capture) of the subject. The device has a miniature wireless RGB camera on the back side of the user's hand rather than the palm side. Attaching the small camera to the back of the hand makes it possible to minimize the restraint on the subject's motions during motion capture. Conventional techniques attach the camera on the palm side of the hand because the images of the fingertips always need to be captured by the camera before any other parts of the hand. In contrast, the image processing algorithm the authors propose here is capable of estimating the hand pose without capturing the fingertips first. Our system allows hand and finger motion capture in clinical diagnosis without psychological burden or physical constraints on the subject.


IEEE/SICE International Symposium on System Integration | 2014

Development of wearable hand capture detecting hand posture in immersive VR system

Motomasa Tomida; Kiyoshi Hoshino

The objective of this study is to provide a device capable of detecting all the bending angles of the finger joints at an execution speed of 60 fps or higher with a high degree of accuracy, so as to achieve the same grasping and pinching motions in an immersive VR system as in real space. To make the device smaller and lighter, we propose a method for detecting hand gestures with a wearable hand capture device composed of a small camera and a single-board computer (hand posture estimation device), estimating the hand postures through image recognition. The authors' research group previously achieved high-speed, high-accuracy hand posture estimation by two-stage database searching using two types of image information: the image shape ratio and the image feature quantity. This study adopts an algorithm further accelerated by compressing the dimensionality of the image features from 1600 to 64. Moreover, an immersive VR system integrating a test model of the device was developed to assess its performance. The assessment experiment showed that many users can easily learn how to operate the device because their own hand postures appear on the VR system screen, suggesting that the device is useful.
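
The abstract does not state which method was used to compress the image features from 1600 to 64 dimensions; the sketch below uses principal component analysis (PCA) purely as an assumed illustration of such a compression, with random placeholder data standing in for the real database features.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features_1600 = rng.random((10000, 1600))   # placeholder for the database features

# Compress each feature vector from 1600 to 64 dimensions.
pca = PCA(n_components=64)
features_64 = pca.fit_transform(features_1600)

# At run time a query feature is projected the same way before matching,
# so the per-frame similarity search touches 64 values instead of 1600.
query_64 = pca.transform(rng.random((1, 1600)))
print(features_64.shape, query_64.shape)    # (10000, 64) (1, 64)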


Key Engineering Materials | 2014

Visual-Servoing Control of Robot Hand with Estimation of Full Articulation of Human Hand

Motomasa Tomida; Kiyoshi Hoshino

Depth sensors and depth cameras have become available at reasonable cost in recent years. However, due to the excessive dispersion of the depth values output by such cameras, they cannot be directly employed for complicated hand pose estimation. The authors therefore propose a visual-servoing-controlled robotic hand using RGB high-speed cameras. Each of the two cameras has its own database in the system. Each data set contains proportional information on the hand image and image features for matching, together with joint angle data to be output as the estimated result. Once sequential hand images are recorded with the two high-speed RGB cameras, the system first selects the database whose recorded image contains the larger hand region. Second, a coarse screening is carried out according to the proportional information on the hand image, which roughly corresponds to wrist rotation or thumb and finger extension. Third, a detailed similarity search is performed among the selected candidates. The estimated results are transmitted to a robot hand so that the operator's motions are reproduced by the robot without time delay.
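
The first step, choosing which camera's database to use, depends only on which recorded image contains the larger hand region. The following is a minimal sketch of that choice, assuming the hand has already been segmented into a binary mask per camera (the segmentation itself is not described here).

import numpy as np

def select_camera(mask_cam0, mask_cam1):
    # Each mask is a 2D binary image marking hand pixels for one camera.
    # The camera seeing the larger hand region is assumed to give the more
    # reliable match, so its database drives the subsequent coarse screening
    # and detailed similarity search.
    area0 = np.count_nonzero(mask_cam0)
    area1 = np.count_nonzero(mask_cam1)
    return 0 if area0 >= area1 else 1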


Tenth International Conference on Quality Control by Artificial Vision | 2011

Mobile robot control using 3D hand pose estimation

Kiyoshi Hoshino; Takuya Kasahara; Naoki Igo; Motomasa Tomida; Takanobu Tanimoto; Toshimitsu Mukai; Gilles Brossard; Hajime Kotani

We propose a mobile robot control method using 3D hand pose estimation without using sensors or controllers. The hand pose estimation we propose reduces the number of image features per data set, which makes it possible to construct a large-scale database and to estimate the 3D hand poses of unspecified users with individual differences without sacrificing estimation accuracy. The system involves the construction in advance of a large database comprising three elements: hand joint information including the wrist, low-order proportional information on the hand images indicating the rough hand shape, and hand pose data composed of low-order image features per data set.
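
The three-element database described above can be pictured as a simple record type. The field names, types, and shapes below are hypothetical, chosen only to make the structure concrete.

from dataclasses import dataclass
import numpy as np

@dataclass
class HandPoseRecord:
    joint_angles: np.ndarray   # hand joint information including the wrist (degrees)
    proportions: np.ndarray    # low-order proportional information (rough hand shape)
    features: np.ndarray       # low-order image features used for detailed matching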


International Conference on Computer Graphics and Interactive Techniques | 2010

Gesture-world technology: 3D hand pose estimation system for unspecified users using a compact high-speed camera

Kiyoshi Hoshino; Motomasa Tomida; Takanobu Tanimoto

This technology allows people to control devices such as computers, communications devices, household appliances, and robots by means of everyday gestures, without using sensors or controllers. It employs high-speed, high-accuracy computer vision technology capable of estimating human hand and arm poses captured by a compact high-speed camera.


IEEE/SICE International Symposium on System Integration | 2010

Gesture-world environment technology for mobile manipulation

Kiyoshi Hoshino; Takuya Kasahara; Naoki Igo; Motomasa Tomida; Toshimitsu Mukai; Kinji Nishi; Hajime Kotani

The aim of this paper is to propose technology that allows people to control robots by means of everyday gestures without using sensors or controllers. The hand pose estimation we propose reduces the number of image features per data set to 64, which makes the construction of a large-scale database possible. This has also made it possible to estimate the 3D hand poses of unspecified users with individual differences without sacrificing estimation accuracy. Specifically, the system we propose involves the construction in advance of a large database comprising three elements: hand joint information including the wrist, low-order proportional information on the hand images indicating the rough hand shape, and hand pose data composed of 64 image features per data set. To estimate a hand pose, the system first performs coarse screening to select similar data sets from the database based on the three hand proportions of the input image, and then performs a detailed search to find the data set most similar to the input image based on the 64 image features. Using subjects with varying hand poses, we performed joint angle estimation with our hand pose estimation system comprising 750,000 hand pose data sets, achieving roughly the same average estimation error as our previous system, about 2 degrees. However, the standard deviation of the estimation error was smaller than in our previous system with roughly 30,000 data sets: down from 26.91 degrees to 14.57 degrees for the index finger PIP joint and from 15.77 degrees to 10.28 degrees for the thumb. We were thus able to confirm an improvement in estimation accuracy, even for unspecified users. Further, the processing speed, using a notebook PC of normal specifications and a compact high-speed camera, was about 80 fps or more, including image capture, hand pose estimation, and CG rendering and robot control based on the estimation result.
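
To make the scale concrete: with 750,000 data sets, comparing every frame against all 64-dimensional feature vectors would be costly, so the coarse screening on the three hand proportions prunes the search first. The sketch below is an assumed, vectorized illustration of that pruning; the tolerance and random data are invented for the example and are not the authors' values.

import numpy as np

rng = np.random.default_rng(1)
N = 750_000                                        # database size reported above
proportions = rng.random((N, 3))                   # three hand proportions per data set
features = rng.random((N, 64), dtype=np.float32)   # 64 image features per data set

def match(query_prop, query_feat, tol=0.02):
    # Coarse screening: keep only data sets whose three proportions are close
    # to those of the input image.
    keep = np.flatnonzero(np.all(np.abs(proportions - query_prop) < tol, axis=1))
    if keep.size == 0:                             # fall back to the full database
        keep = np.arange(N)
    # Detailed search: nearest neighbour over the 64 features, but only among
    # the surviving candidates, which keeps the per-frame cost low.
    d = np.linalg.norm(features[keep] - query_feat, axis=1)
    return keep[np.argmin(d)]

best_index = match(rng.random(3), rng.random(64, dtype=np.float32))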


Journal of Robotics and Mechatronics | 2009

3D Hand Pose Estimation Using a Single Camera for Unspecified Users

Kiyoshi Hoshino; Motomasa Tomida


Archive | 2009

Device, method and program for human hand posture estimation

Kiyoshi Hoshino; Motomasa Tomida


Journal of Robotics and Mechatronics | 2012

Gesture-World Environment Technology for Mobile Manipulation – Remote Control System of a Robot with Hand Pose Estimation –

Kiyoshi Hoshino; Takuya Kasahara; Motomasa Tomida; Takanobu Tanimoto

Collaboration


Dive into Motomasa Tomida's collaboration.

Top Co-Authors


Naoki Igo

University of Tsukuba


Isao Kawano

Japan Aerospace Exploration Agency
