Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Nobutaka Kimura is active.

Publication


Featured research published by Nobutaka Kimura.


Conference of the Industrial Electronics Society | 2009

A human tracking mobile-robot with face detection

Satoru Suzuki; Yasue Mitsukura; Hironori Takimoto; Takanari Tanabata; Nobutaka Kimura; Toshio Moriya

In this paper, we propose a face detection method for tracking a human with a mobile robot. We obtain images from a web camera and detect faces by focusing on skin color and eyes as facial features. When a face is detected in an image, the mobile robot tracks the detected person, takes a picture of him/her, and prints it automatically. To show the effectiveness of the proposed method, we present experimental results: first the face detection accuracy, and then the tracking performance of the mobile robot using face detection.
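The abstract combines skin-color segmentation with a facial-feature (eye) check. Below is a minimal sketch of that idea using OpenCV; the HSV thresholds and the Haar cascade files are illustrative assumptions, not the authors' actual parameters or implementation.

```python
import cv2
import numpy as np

# Hypothetical skin-color range in HSV; the paper's actual thresholds are not given.
SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face(frame):
    """Return the bounding box of a face that lies in a skin-colored region
    and contains at least one detected eye, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin_mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Require the candidate region to be mostly skin-colored.
        if cv2.mean(skin_mask[y:y + h, x:x + w])[0] < 80:
            continue
        # Require an eye inside the candidate to suppress false positives.
        if len(eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])) > 0:
            return (x, y, w, h)
    return None
```

A tracking controller would then steer the mobile robot to keep the returned box centered in the image, which is the role face detection plays in the described system.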


Electronic Imaging | 2006

Integral videography of high-density light field with spherical layout camera array

Takafumi Koike; Michio Oikawa; Nobutaka Kimura; Fumiko Beniyama; Toshio Moriya; Masami Yamasaki

We propose a spherical layout for a camera array system when shooting images for use in Integral Videography (IV). IV is an autostereoscopic video technique based on Integral Photography (IP) and is one of the preferred autostereoscopic techniques for displaying images. Many studies on autostereoscopic displays based on this technique indicate its potential advantages. Other camera arrays have been studied, but they addressed other issues, such as acquiring high-resolution images, capturing a light field, or creating content for non-IV-based autostereoscopic displays. Moreover, IV displays images with high stereoscopic resolution when objects are shown close to the display, so we have to capture high-resolution images in the close vicinity of the display. We constructed the spherical-layout camera array system using 30 cameras arranged in a 6 by 5 array. Each camera had an angular difference of 6 degrees from its neighbors, and all cameras were pointed toward the sphere center. The cameras capture movies synchronously, and the resolution of each camera is 640 by 480. With this system, we confirmed the effectiveness of the proposed camera layout, captured actual IP images, and displayed real autostereoscopic images.
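As a worked example of the geometry described above, the sketch below places a 6-by-5 camera grid on a sphere with 6-degree angular spacing, every camera pointed at the sphere center. The sphere radius and the grid centering are assumptions made only for illustration.

```python
import numpy as np

def spherical_camera_layout(cols=6, rows=5, step_deg=6.0, radius=1.0):
    """Return (position, viewing direction) pairs for a cols-by-rows camera
    array laid out on a sphere, with step_deg angular spacing between
    neighboring cameras and every camera looking at the sphere center."""
    cams = []
    step = np.radians(step_deg)
    for i in range(cols):
        for j in range(rows):
            # Center the grid around azimuth/elevation = 0.
            az = (i - (cols - 1) / 2.0) * step
            el = (j - (rows - 1) / 2.0) * step
            pos = radius * np.array([
                np.cos(el) * np.sin(az),
                np.sin(el),
                np.cos(el) * np.cos(az)])
            direction = -pos / np.linalg.norm(pos)  # look toward the center
            cams.append((pos, direction))
    return cams

print(len(spherical_camera_layout()))  # 30 cameras, as in the prototype
```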


IEEE/SICE International Symposium on System Integration | 2015

Mobile dual-arm robot for automated order picking system in warehouse containing various kinds of products

Nobutaka Kimura; Kiyoto Ito; Taiki Fuji; Keisuke Fujimoto; Kanako Esaki; Fumiko Beniyama; Toshio Moriya

We have prototyped a mobile dual-arm robot and an automated order picking system, including the robot, for warehouses that contain various kinds of products. By using self-localization, model-based object recognition, and arm trajectory planning, the robot can autonomously move to the front of the shelf that holds a target product, fetch the product from the shelf, and put it into a carton transported by an automated guided vehicle. To adapt to various kinds of products and their storage situations, the robot makes its two arms collaborate and mounts tables that lift and rotate the two arms and four different types of end effectors. In experimental tests, the robot successfully picked out bottles in a case and picked up two different sizes of boxes placed directly on shelves of different heights. As a result, our proposed system can automate whole order picking operations in warehouses where such operations are currently performed by workers.


Advanced Robotics | 2012

Real-Time Updating of 2D Map for Autonomous Robot Locomotion Based on Distinction Between Static and Semi-Static Objects

Nobutaka Kimura; Keisuke Fujimoto; Toshio Moriya

Autonomous mobile robots are increasingly being used in complex 2D environments such as factories, warehouses, and offices. For such environments, we propose a real-time technique for updating an environmental map used for a robot's self-localization with a bearing-range sensor, in situations where a base map can be prepared in advance. These environments contain many semi-static objects, such as cardboard boxes, whose locations change frequently. Therefore, the self-localization needs to reflect changes in both the existence and position of semi-static objects in the map in real time. However, if the robot uses a traditional technique that updates all objects and keeps updating the map for a long period, static objects such as walls will move slightly on the map due to errors in both measurement and self-localization, and the map will be distorted. Our technique therefore distinguishes between static and semi-static objects on the map and defines the changeability of the occupancy probability of every spatial grid cell, so that the map is updated without changing the occupancy probabilities of cells around static objects. By doing so, we prevent the map from being distorted. In addition, by estimating each cell's status between two observations of the same cell and changing the probability of an object's fixedness based on that status, our technique can robustly distinguish the objects on the map even if the timing of observations is irregular.
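A minimal sketch of the per-cell "changeability" idea: each grid cell keeps an occupancy log-odds value and a fixedness estimate, and cells judged static receive almost no update, so walls do not drift. The weighting and smoothing constants below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

L_HIT, L_MISS = 0.85, -0.4  # illustrative log-odds increments per observation

class UpdatableGrid:
    def __init__(self, shape):
        self.log_odds = np.zeros(shape)        # occupancy log-odds per cell
        self.fixedness = np.full(shape, 0.5)   # P(cell belongs to a static object)

    def update(self, hits, misses):
        """hits/misses are boolean masks from one scan. Cells believed static
        (high fixedness) get a small update weight; semi-static cells are
        updated normally, so changes in box locations are tracked."""
        weight = 1.0 - self.fixedness
        self.log_odds[hits] += weight[hits] * L_HIT
        self.log_odds[misses] += weight[misses] * L_MISS

    def observe_fixedness(self, unchanged):
        """Raise fixedness for cells whose occupancy state was the same in two
        observations, lower it otherwise (simple exponential smoothing)."""
        self.fixedness = np.where(unchanged,
                                  0.9 * self.fixedness + 0.1,
                                  0.9 * self.fixedness)
```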


Proceedings of SPIE | 2012

Method for fast detecting the intersection of a plane and a cube in an octree structure to find point sets within a convex region

Keisuke Fujimoto; Nobutaka Kimura; Toshio Moriya

Performing efficient view frustum culling is a fundamental problem in computer graphics. In general, an octree is used for view frustum culling: the culling checks the intersection of each octree node (cube) against the planes of the view frustum. However, this involves many calculations. We propose a method for quickly detecting the intersection of a plane and a cube in an octree structure. When we check which child of an octree node intersects a plane, we compare the coordinates of the node's corners with the plane, and the vertices of a child node are computed from the vertices of its parent node. To find points within a convex region, a visibility test is performed by ANDing the results for three or more planes. In experiments, we tested the problem of searching for the points visible to a camera. The method was two times faster than the conventional method, which detects a visible octree node by using the inner product of the plane and each corner of the node.
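A plane-versus-cube test of this kind can be written without evaluating all eight corners by picking, per plane, the two axis-aligned box corners farthest along and against the plane normal (the "p-vertex"/"n-vertex" trick). The sketch below shows that standard test and the AND over frustum planes; it is independent of the authors' exact implementation.

```python
import numpy as np

def box_vs_plane(bmin, bmax, normal, d):
    """Classify an axis-aligned box against the plane n.x + d = 0.
    Returns +1 (fully in front), -1 (fully behind) or 0 (intersecting).
    Only the two extreme corners along the normal need to be tested."""
    p = np.where(normal >= 0, bmax, bmin)  # corner farthest along the normal
    n = np.where(normal >= 0, bmin, bmax)  # corner farthest against it
    if np.dot(normal, n) + d > 0:
        return +1
    if np.dot(normal, p) + d < 0:
        return -1
    return 0

def box_in_frustum(bmin, bmax, planes):
    """AND the per-plane results: a box is kept only if it is not fully behind
    any frustum plane (planes given as (normal, d) with inward-facing normals)."""
    return all(box_vs_plane(np.asarray(bmin), np.asarray(bmax), np.asarray(n), d) >= 0
               for n, d in planes)
```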


Proceedings of SPIE | 2009

Geometric alignment for large point cloud pairs using sparse overlap areas

Keisuke Fujimoto; Nobutaka Kimura; Fumiko Beniyama; Toshio Moriya; Yasuichi Nakayama

We present a novel approach for geometric alignment of 3D sensor data. The Iterative Closest Point (ICP) algorithm is widely used for geometric alignment of 3D models as a point-to-point matching method when an initial estimate of the relative pose is known. However, accurate point-to-point correspondence is difficult to obtain when the points are sparsely distributed. In addition, the search cost is high because the ICP algorithm requires a nearest-neighbor search at every minimization step. In this paper, we describe a plane-to-plane registration method. We define the distance between two planes and estimate the translation parameter by minimizing that distance. The plane-to-plane method can register sets of scattered points that are sparsely distributed and of low density, at low cost. We tested this method with a large scattered point cloud of a manufacturing plant and show the effectiveness of our proposed method.
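One way to realize the translation estimate the abstract describes: for matched plane pairs sharing a normal n_i with offsets d_i and d_i', the aligning translation t satisfies n_i·t ≈ d_i' − d_i, which is a small linear least-squares problem. The sketch below assumes the plane correspondences are already extracted and matched, which the paper handles separately.

```python
import numpy as np

def translation_from_plane_pairs(normals, d_src, d_dst):
    """Estimate the translation t mapping source planes (n_i . x = d_src_i)
    onto destination planes (n_i . x = d_dst_i) by least squares:
    minimize sum_i (n_i . t - (d_dst_i - d_src_i))^2."""
    A = np.asarray(normals, dtype=float)                       # (N, 3) unit normals
    b = np.asarray(d_dst, dtype=float) - np.asarray(d_src, dtype=float)  # (N,)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Three non-parallel planes pin down the translation uniquely.
normals = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(translation_from_plane_pairs(normals, [0, 0, 0], [0.5, -0.2, 1.0]))
```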


Systems, Man and Cybernetics | 2016

Placement-position search technique for packing various objects with robot manipulator

Kanako Esaki; Nobutaka Kimura; Kiyoto Ito

In this study, a placement-position search technique for robot-executable, space-efficient packing of various objects with a robot manipulator has been developed. The technique enables placing objects in order from a corner of a container that fulfills the following conditions: (A) the corner is within the reach of the robot manipulator, and (B) the robot manipulator does not collide with surrounding obstacles when it moves an object to the corner. The technique is as follows: in a simulation of robot movement, a robot manipulator holding an object alternates between motions in a straight line and random reflections from a surface of either the container or already-packed objects. The motions and reflections are subject to physical constraints such as the reach of the robot manipulator and collisions between the robot and surrounding obstacles. The technique was evaluated with numerical examples based on the CAD data of an actual industrial robot manipulator using five types of objects, and the robot-packable corner for each object was determined. We therefore concluded that the technique is essential for the packing of various objects with a robot manipulator, and it is expected to be used for packing with robot manipulators in warehouses and factories.
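A minimal 2D sketch of the search motion described above: a candidate point moves in straight segments inside an axis-aligned container and reflects off the walls with a small random perturbation. In the actual technique the motion is additionally constrained by the manipulator's reach and collision checks against already-packed objects, which are omitted here; the corner preference and step sizes are assumptions for illustration.

```python
import random

def reflect_search(container_w, container_h, steps=1000, step_len=0.01):
    """Move a point in straight segments inside a container, reflecting off
    the walls; return the most lower-left position reached (a corner candidate)."""
    x, y = container_w / 2, container_h / 2
    dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
    best = (x, y)
    for _ in range(steps):
        x, y = x + step_len * dx, y + step_len * dy
        if not 0 <= x <= container_w:          # reflect off a vertical wall
            dx = -dx
            dy += random.uniform(-0.2, 0.2)    # random perturbation on reflection
            x = min(max(x, 0), container_w)
        if not 0 <= y <= container_h:          # reflect off a horizontal wall
            dy = -dy
            dx += random.uniform(-0.2, 0.2)
            y = min(max(y, 0), container_h)
        if x + y < best[0] + best[1]:          # prefer the lower-left corner
            best = (x, y)
    return best

print(reflect_search(0.4, 0.3))
```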


Proceedings of SPIE | 2014

A robust method for online stereo camera self-calibration in unmanned vehicle system

Yu Zhao; Nobuhiro Chihara; Tao Guo; Nobutaka Kimura

Self-calibration is a fundamental technology for estimating the relative posture of the cameras used for environment recognition in an unmanned system. We focused on the decrease in recognition accuracy caused by vibration of the platform and conducted this research to achieve online self-calibration using feature point registration and robust estimation of the fundamental matrix. Three key factors need to be improved. First, feature mismatching decreases the estimation accuracy of the relative posture. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, some intrinsic system noises also contribute greatly to the deviation of the estimation results. In order to improve calibration accuracy, estimation speed, and system robustness for practical implementation, we analyze these algorithms and make improvements to the stereo camera system to achieve online self-calibration. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed so that the search for corresponding feature points is performed within a small range, improving matching accuracy and search speed. Next, two conventional estimation algorithms are analyzed and evaluated for estimation accuracy and robustness. Third, a rigorous posture calculation method is proposed that accounts for the relative posture deviation of each separate part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that our proposed method is a fast, highly accurate, and highly robust online self-calibration algorithm. Thus, as the main contribution, we propose methods that solve online self-calibration quickly and accurately, and we envision practical implementation on unmanned systems as well as other environment recognition systems.
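A minimal sketch of the robust fundamental-matrix estimation step using OpenCV is shown below. The feature detector, match filtering, and RANSAC threshold here are generic choices, not the authors' specific improvements; in particular, the epipolar and disparity-based search-range constraints described in the abstract are not reproduced.

```python
import cv2
import numpy as np

def estimate_fundamental(img_left, img_right):
    """Match ORB features between the two views and robustly estimate the
    fundamental matrix with RANSAC; return F and the inlier ratio."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC with a 1-pixel epipolar distance threshold and 0.99 confidence.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    return F, float(mask.sum()) / len(mask)
```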


Conference of the Industrial Electronics Society | 2009

Aligning the real space with the model on see-through typed HMD for mixed reality

Taiki Fuji; Yasue Mitsukura; Takanari Tanabata; Nobutaka Kimura; Toshio Moriya

In this paper, we propose an approach to align the real space with a three-dimensional (3D) model for an outdoor wearable mixed reality (MR) system. In our approach, we use a monocular see-through head-mounted display (ST-HMD) and a virtual reality (VR) sensor that can measure six degrees of freedom (6DOF). In the default setting, it is difficult and burdensome for users to manipulate the 3D model with an air mouse, so we reduce this burden by automating the default setting. Moreover, when the user changes the viewpoint, the computer graphics (CG) model on the ST-HMD must be updated so that it remains consistent with the real objects despite the change in view. We obtain translation and rotation data from the VR sensor and apply them to the CG model, and we built the alignment system on this basis. Furthermore, to evaluate the proposed alignment of the 3D model, we show some results of the wearable alignment system using the 3D model.
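The core of the alignment step is applying the sensor's 6DOF pose to the CG model so that it follows the viewpoint. The sketch below builds a rigid transform from hypothetical sensor readings (Euler angles plus translation) and applies its inverse to model vertices; the rotation order and coordinate conventions are assumptions, not those of the actual system.

```python
import numpy as np

def pose_matrix(roll, pitch, yaw, tx, ty, tz):
    """4x4 rigid transform from Euler angles (radians, applied as Rz*Ry*Rx)
    and a translation, representing the sensor pose in the world frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def transform_model(vertices, sensor_pose):
    """Move model vertices (N x 3, world frame) into the viewer's frame."""
    V = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (np.linalg.inv(sensor_pose) @ V.T).T[:, :3]
```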


Machine Vision Applications | 2007

Camera calibrationless vision calibration for transfer robot system

Nobutaka Kimura; Toshio Moriya; Kohsei Matsumoto

This research focused on a system in which a manipulator with robot vision transfers objects to a mobile robot that moves on a flat floor. In this system, the end effector of the manipulator operates on a plane surface, so only a single camera view is required. In a robot vision system, many processes are usually needed for vision calibration, and one of them is measurement of the camera parameters. We developed a calibration technique that does not explicitly require the camera parameters, in order to reduce the number of calibration processes required for the researched system. With this technique, we measured the relations between the coordinate systems of the images and of the mobile robot in the moving plane by using a projective transformation framework. We also measured the relations between the images and the manipulator in an arbitrary plane in the same way. By analyzing these results, we obtained the relation between the mobile robot and the manipulator without explicitly calculating the camera parameters, which means that capturing images of a calibration board can be skipped. We tested the calibration technique on an object-transfer task, and the results showed that the technique has sufficient accuracy to achieve the task.
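The plane-to-plane relations in the abstract can be represented as homographies: once the image-to-mobile-robot and image-to-manipulator homographies are estimated from point correspondences, the robot-to-manipulator mapping follows by composition, with no camera intrinsics or extrinsics recovered. The sketch below uses OpenCV's homography estimation; the correspondence points are placeholders to be supplied by the user, and this is an illustration of the general idea rather than the authors' exact procedure.

```python
import cv2
import numpy as np

def plane_to_plane_mapping(img_pts_robot, robot_pts, img_pts_manip, manip_pts):
    """Estimate homographies image->mobile-robot plane and image->manipulator
    plane from 2D point correspondences, then compose them into a direct
    robot->manipulator mapping (camera parameters are never computed)."""
    H_img_to_robot, _ = cv2.findHomography(np.float32(img_pts_robot),
                                           np.float32(robot_pts))
    H_img_to_manip, _ = cv2.findHomography(np.float32(img_pts_manip),
                                           np.float32(manip_pts))
    # robot plane -> image -> manipulator plane
    return H_img_to_manip @ np.linalg.inv(H_img_to_robot)

def map_point(H, xy):
    """Apply a 3x3 homography to a 2D point."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return v[:2] / v[2]
```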

Collaboration


Dive into Nobutaka Kimura's collaboration.

Top Co-Authors

Keisuke Fujimoto

University of Electro-Communications