
Publication


Featured research published by Kazuhiko Sumi.


International Conference on Robotics and Automation | 2014

Fast graspability evaluation on single depth maps for bin picking with general grippers

Yukiyasu Domae; Haruhisa Okuda; Yuichi Taguchi; Kazuhiko Sumi; Takashi Hirai

We present a method that estimates graspability measures on a single depth map for grasping objects randomly placed in a bin. Our method represents a gripper model by two mask images: one describes the contact region that should be filled by a target object for stable grasping, and the other describes the collision region that should not be filled by other objects, to avoid collisions during grasping. The graspability measure is computed by convolving the mask images with binarized depth maps, which are thresholded differently in each region according to the minimum height of the 3D points in the region and the length of the gripper. Our method does not assume any 3D model of the objects and is thus applicable to general objects. Our representation of the gripper model with two mask images is also applicable to general grippers, such as multi-finger and vacuum grippers. We apply our method to bin picking of piled objects with a robot arm and demonstrate fast pick-and-place operations for various industrial objects.
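The two-mask idea in this abstract can be sketched in a few lines. This is a minimal, illustrative reading only, assuming binary masks at a single fixed gripper orientation and hand-picked threshold values; the paper derives its thresholds from point heights and gripper length, and the function and parameter names below are hypothetical:

```python
import numpy as np

def graspability_map(depth, contact_mask, collision_mask,
                     contact_thresh, collision_thresh):
    """Illustrative graspability sketch: score each gripper position by how
    well the contact region is filled, rejecting positions where anything
    intrudes into the collision region."""
    # Binarize the depth map separately for each region.
    contact_bin = (depth >= contact_thresh).astype(float)      # surface high enough to contact
    collision_bin = (depth >= collision_thresh).astype(float)  # obstacles the gripper would hit

    mh, mw = contact_mask.shape
    H, W = depth.shape
    score = np.full((H - mh + 1, W - mw + 1), -np.inf)
    for y in range(H - mh + 1):              # slide the gripper model over the map
        for x in range(W - mw + 1):
            win_c = contact_bin[y:y + mh, x:x + mw]
            win_o = collision_bin[y:y + mh, x:x + mw]
            contact_fill = (win_c * contact_mask).sum()        # want this high
            collision_fill = (win_o * collision_mask).sum()    # want this zero
            if collision_fill == 0:
                score[y, x] = contact_fill
    return score
```

On a toy 6x6 depth map with one raised object, the highest score lands where the contact mask exactly covers the object. A real implementation would evaluate several gripper rotations and use FFT-based convolution for speed.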


IEEE Transactions on Consumer Electronics | 2006

Optimum motion estimation algorithm for fast and robust digital image stabilization

Haruhisa Okuda; Manabu Hashimoto; Kazuhiko Sumi; Shun'ichi Kaneko

To realize fast and robust digital image stabilization, we propose a new optimum motion estimation algorithm that works under varying illumination, occlusion, and blooming. The algorithm extends our fast template matching method, HDTM (hierarchical distributed template matching). First, only the reference blocks that are indispensable for accurate motion estimation are selected, based on their reliability and consistency for pose estimation. Next, using the LMedS method, the motion vectors of these blocks are segmented into groups, and only the reliable ones are used to estimate the whole-image motion. Experimental results showed errors below ±0.1 pixels, ±0.1 degrees, and ±0.1% in scale under various kinds of disturbance, with a typical processing time of 11 ms on a Pentium II 300 MHz CPU, which is fast enough for real-time embedded image stabilization.
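The LMedS (Least Median of Squares) step can be illustrated for the simplest case, a pure translation. This sketch is not the paper's implementation: it treats each block's motion vector as a candidate global translation and keeps the candidate whose squared residuals have the smallest median, which tolerates up to roughly half the vectors being outliers:

```python
import numpy as np

def lmeds_translation(vectors):
    """LMedS sketch for global translation: every motion vector proposes a
    model; the winner minimizes the median of squared residuals."""
    vectors = np.asarray(vectors, dtype=float)
    best, best_med = None, np.inf
    for cand in vectors:                               # candidate global motion
        resid = ((vectors - cand) ** 2).sum(axis=1)    # squared residual per block
        med = np.median(resid)
        if med < best_med:
            best, best_med = cand, med
    return best, best_med
```

With seven blocks agreeing on (2, 1) and three wild outliers, the median residual of the true motion is zero, so it wins regardless of how large the outliers are.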


International Conference on Mechatronics and Automation | 2011

A robotic assembly system capable of handling flexible cables with connector

Yasuo Kitaaki; Rintaro Haraguchi; Koji Shiratsuchi; Yukiyasu Domae; Haruhisa Okuda; Akio Noda; Kazuhiko Sumi; Toshio Fukuda; Shun'ichi Kaneko; Takayuki Matsuno

In realizing a robotic assembly system for electronic products, recognizing a connector with a flexible cable as a single component is one of the most difficult problems and prevents the system from being fully automated. To overcome this problem, we used our proprietary 3-D range sensor and developed three component algorithms: one recognizes randomly stacked connectors; another automatically compensates rotational and positional errors using a force sensor; and the third sets up a visually guided offline development environment. In this paper, we describe these component algorithms and their integration into an evaluation system in detail.


International Conference on Human-Computer Interaction | 2016

Micro-Expression Recognition for Detecting Human Emotional Changes

Kazuhiko Sumi; Tomomi Ueda

We propose a method for estimating a person's emotional state in communication from four micro-expressions: mouth motion, head pose, gaze direction, and blinking interval. These micro-expressions were selected through a questionnaire survey of human observers watching video-recorded conversations. We then implemented a recognition system for them: facial parts are detected with an RGB-D camera, the four expressions are measured, and a decision-tree classifier detects emotional states and state changes. In our experiment, we collected 30 videos of people communicating with a friend, then trained and validated our algorithm with two-fold cross-validation. Comparing the classifier output with human examiners' observations, we confirmed over 70% precision.
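A decision tree over four scalar cues can be very small. The sketch below is purely illustrative: the thresholds, feature encodings, and state labels are hypothetical and not taken from the paper, which learned its tree from annotated data:

```python
def classify_state(mouth_motion, head_yaw_deg, gaze_on_partner, blink_interval_s):
    """Hand-built decision-tree sketch over the four micro-expression cues.
    All thresholds and labels are invented for illustration."""
    if not gaze_on_partner and abs(head_yaw_deg) > 20:
        return "disengaged"            # looking away from the partner
    if blink_interval_s < 1.0:         # rapid blinking as a tension cue
        return "tense"
    if mouth_motion > 0.5:             # large mouth motion while engaged
        return "animated"
    return "neutral"
```

State *changes* would then be detected by comparing the label of consecutive time windows, which is where a questionnaire-derived feature set pays off: each branch remains interpretable to a human examiner.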


International Conference on Human-Agent Interaction | 2017

Speech-to-Gesture Generation: A Challenge in Deep Learning Approach with Bi-Directional LSTM

Kenta Takeuchi; Dai Hasegawa; Shinichi Shirakawa; Naoshi Kaneko; Hiroshi Sakuta; Kazuhiko Sumi

In this research, we take a first step toward generating gesture motion data directly from speech features. Such a method could make creating gesture animations for Embodied Conversational Agents much easier. We implemented a Bi-Directional LSTM model that takes phonemic features from speech audio as input and outputs a time sequence of bone-joint rotations. We assessed the validity of the predicted gesture motion by evaluating the final loss value of the network, and by comparing impressions of the predicted gestures against the actual motion data that accompanied the input audio and against motion data that accompanied different audio. The results showed that the prediction accuracy of the LSTM model was better than that of a simple RNN model. In contrast, the impression of the predicted gestures was rated lower than both the original and the mismatched gestures, although some individual predicted gestures were rated comparably to the mismatched ones.
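The sequence-to-sequence mapping described above can be sketched as a forward pass of a bidirectional LSTM. This is a toy numpy sketch only, with random weights and hypothetical dimensions (13 phonemic features in, 6 joint rotations out); the paper's actual architecture, feature extraction, and training procedure are not reproduced:

```python
import numpy as np

def lstm_forward(x, Wx, Wh, b):
    """One LSTM direction over x (T, D); gates packed as [i, f, g, o]."""
    T = x.shape[0]
    H = Wh.shape[0]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h, c = np.zeros(H), np.zeros(H)
    hs = np.empty((T, H))
    for t in range(T):
        z = x[t] @ Wx + h @ Wh + b               # all four gates at once
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)      # cell update
        h = sig(o) * np.tanh(c)                   # hidden state
        hs[t] = h
    return hs

def bilstm_speech_to_gesture(x, p):
    """Concatenate forward and time-reversed backward passes, then project
    to per-frame joint rotations."""
    hf = lstm_forward(x, *p["fwd"])
    hb = lstm_forward(x[::-1], *p["bwd"])[::-1]   # backward pass, re-reversed
    h = np.concatenate([hf, hb], axis=1)          # (T, 2H)
    return h @ p["Wy"] + p["by"]                  # (T, J) joint rotations

def init_params(D, H, J, seed=0):
    """Random toy weights; a real model would be trained with backprop."""
    rng = np.random.default_rng(seed)
    mk = lambda: (rng.normal(0, 0.1, (D, 4 * H)),
                  rng.normal(0, 0.1, (H, 4 * H)),
                  np.zeros(4 * H))
    return {"fwd": mk(), "bwd": mk(),
            "Wy": rng.normal(0, 0.1, (2 * H, J)), "by": np.zeros(J)}
```

The bidirectional pass is the point: the rotation emitted at frame t can depend on speech both before and after t, which matters because gestures often anticipate the words they accompany.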


International Conference on Machine Vision | 2017

Two-stage model fitting approach for human body shape estimation from a single depth image

Mei Oyama; Naoshi Kaneko; Masaki Hayashi; Kazuhiko Sumi; Takeshi Yoshida

Recovering an accurate 3D human body shape from a single depth image is a challenging problem in computer vision due to sensor noise, the complexity of human body shapes, and individual variation. In this paper, we address the problem with a two-stage model-fitting approach. In the first stage, a coarse template model is fitted to the human pose in the input depth image using skeleton deformation, and then fitted to the human shape by Laplacian surface editing. This fitting may corrupt the human-like shape of the template model because of incomplete depth information. In the second stage, body-shape details are recovered by fitting a fine model to the deformed template using Stitched Puppet model fitting. Several experiments demonstrate that our approach recovers the most likely body shape for the input and handles a variety of input body shapes.
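The Laplacian surface editing step mentioned above can be illustrated on a tiny mesh. This sketch assumes a uniform (combinatorial) Laplacian and soft positional anchors solved by least squares; it is a generic textbook formulation, not the paper's implementation, and the weight value is arbitrary:

```python
import numpy as np

def laplacian_edit(verts, neighbors, anchors, anchor_w=10.0):
    """Uniform-Laplacian editing sketch: solve for vertex positions that
    preserve each vertex's differential coordinate while pulling anchor
    vertices toward target positions (soft constraints)."""
    n = len(verts)
    L = np.eye(n)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            L[i, j] -= 1.0 / len(nbrs)          # uniform Laplacian weights
    delta = L @ verts                            # differential coords to keep
    rows, rhs = [L], [delta]
    for i, target in anchors.items():            # weighted anchor equations
        e = np.zeros(n); e[i] = anchor_w
        rows.append(e[None, :])
        rhs.append(anchor_w * np.asarray(target, dtype=float)[None, :])
    A, b = np.vstack(rows), np.vstack(rhs)
    new_verts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_verts
```

Moving one anchor of a three-vertex chain upward drags the free middle vertex about halfway up, which is exactly the detail-preserving smooth deformation that fits a template to noisy depth data without hard per-vertex constraints.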


Thirteenth International Conference on Quality Control by Artificial Vision 2017 | 2017

Spatial and temporal segmented dense trajectories for gesture recognition

Kaho Yamada; Takeshi Yoshida; Kazuhiko Sumi; Hitoshi Habe; Ikuhisa Mitsugami

Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results with a variety of datasets. However, if we apply these trajectories to gesture recognition, recognizing similar and fine-grained motions is problematic. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]. Temporal segmentation is performed for a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method using this dataset. The experimental results show that the proposed method outperforms the original dense trajectories.
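The spatial and temporal segmentation described above reduces to a simple grouping rule once trajectories and body-part boxes are given. The sketch below is illustrative only; the trajectory format, box format (x0, y0, x1, y1), and the mean-position membership test are assumptions, not the paper's exact criteria:

```python
import numpy as np

def segment_trajectories(trajs, part_boxes, win_len):
    """Group dense trajectories by (body part, temporal window): keep a
    trajectory only if its mean position falls inside a detected part box
    (spatial segmentation), and bucket it by a fixed-length frame window
    (temporal segmentation). Background trajectories match no box and are
    dropped."""
    segments = {}
    for start_frame, points in trajs:            # points: (T, 2) positions
        cx, cy = np.asarray(points, dtype=float).mean(axis=0)
        for part, (x0, y0, x1, y1) in part_boxes.items():
            if x0 <= cx <= x1 and y0 <= cy <= y1:     # spatial test
                win = start_frame // win_len          # temporal bucket
                segments.setdefault((part, win), []).append(points)
    return segments
```

Per-segment descriptors (e.g. HOG/HOF/MBH along the kept trajectories) would then be computed independently for each (part, window) bucket, which is what lets similar, fine-grained gestures be told apart.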


Machine Vision Applications | 2009

A new approach for in-vehicle camera traffic sign detection and recognition

Andrzej Ruta; Yongmin Li; Fatih Porikli; Shintaro Watanabe; Hiroshi Kage; Kazuhiko Sumi


Journal of Robotics and Mechatronics | 2011

Development of Production Robot System that can Assemble Products with Cable and Connector

Rintaro Haraguchi; Yukiyasu Domae; Koji Shiratsuchi; Yasuo Kitaaki; Haruhisa Okuda; Akio Noda; Kazuhiko Sumi; Takayuki Matsuno; Shun'ichi Kaneko; Toshio Fukuda


IEEJ Transactions on Electronics, Information and Systems | 2004

Real Application of Machine Vision Technology

Kazuhiko Sumi; Shun'ichi Kaneko

Collaboration


Dive into Kazuhiko Sumi's collaborations.

Top Co-Authors


Yukiyasu Domae

Aoyama Gakuin University
