Haruhisa Okuda
Mitsubishi Electric
Publication
Featured research published by Haruhisa Okuda.
international conference on robotics and automation | 2014
Yukiyasu Domae; Haruhisa Okuda; Yuichi Taguchi; Kazuhiko Sumi; Takashi Hirai
We present a method that estimates graspability measures on a single depth map for grasping objects randomly placed in a bin. Our method represents a gripper model by two mask images: one describes a contact region that should be filled by a target object for stable grasping, and the other describes a collision region that should not be filled by other objects, to avoid collisions during grasping. The graspability measure is computed by convolving the mask images with binarized depth maps, which are thresholded differently in each region according to the minimum height of the 3D points in the region and the length of the gripper. Our method does not assume any 3D model of the objects and is thus applicable to general objects. Our representation of the gripper model using the two mask images also generalizes to other grippers, such as multi-finger and vacuum grippers. We apply our method to bin picking of piled objects using a robot arm and demonstrate fast pick-and-place operations for various industrial objects.
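The two-mask scoring idea above can be sketched as follows. This is a minimal illustration with hypothetical names, not the authors' implementation: the paper convolves mask images over the whole map with region-dependent thresholds, while this sketch scores a single gripper position with fixed scalar thresholds.

```python
# Hedged sketch of the two-mask graspability idea (hypothetical helper,
# not the authors' code). A gripper pose is scored by (a) how much of the
# contact mask is filled by object points and (b) whether the collision
# mask is free of obstacles.

def graspability(depth, contact_mask, collision_mask, r, c,
                 contact_thr, collision_thr):
    """Score one gripper position (r, c) on a 2D depth map.

    depth          -- 2D list of heights (larger = closer to camera)
    contact_mask   -- list of (dr, dc) offsets that should be filled
    collision_mask -- list of (dr, dc) offsets that must stay empty
    *_thr          -- height thresholds binarizing each region (in the
                      paper these depend on the minimum point height in
                      the region and the gripper length)
    """
    H, W = len(depth), len(depth[0])

    def h(dr, dc):
        rr, cc = r + dr, c + dc
        return depth[rr][cc] if 0 <= rr < H and 0 <= cc < W else 0.0

    filled = sum(1 for dr, dc in contact_mask if h(dr, dc) >= contact_thr)
    blocked = sum(1 for dr, dc in collision_mask if h(dr, dc) >= collision_thr)

    if blocked:                        # collision region must stay empty
        return 0.0
    return filled / len(contact_mask)  # fraction of contact region filled
```

Evaluating this score densely over the depth map (the convolution in the paper) and taking the maximum yields the best grasp position.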
IEEE Transactions on Consumer Electronics | 2006
Haruhisa Okuda; Manabu Hashimoto; Kazuhiko Sumi; Shun'ichi Kaneko
To realize fast and robust digital image stabilization, we propose in this paper a new optimum motion estimation algorithm that works under varying illumination, occlusion, and blooming. The algorithm extends our fast template matching method, HDTM (hierarchical distributed template matching). First, only the reference blocks that are indispensable for accurate motion estimation are selected, according to their reliability and consistency for pose estimation. Next, using the LMedS (least median of squares) method, the motion vectors of these blocks are segmented into groups, and only the reliable ones are used to estimate the whole-image motion. Experiments showed errors below ±0.1 pixels, ±0.1 degrees, and ±0.1% in scale under various kinds of disturbance, with a typical processing time of 11 ms on a Pentium II 300 MHz CPU, which is fast enough for real-time embedded image stabilization.
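The LMedS selection step can be sketched as follows. This is a simplified, one-dimensional illustration (hypothetical function, not the paper's code): each block's motion vector proposes a candidate translation, and the candidate whose squared residuals have the smallest median wins, so outlier blocks cannot bias the estimate.

```python
import statistics

# Sketch of the LMedS (least median of squares) idea used to pick reliable
# motion vectors; 1-D translation case for clarity (hypothetical helper).

def lmeds_translation(vectors):
    """Choose the candidate translation whose squared residuals have the
    smallest median; blocks far from it can then be rejected as outliers."""
    best, best_med = None, float("inf")
    for cand in vectors:                # each block's vector proposes a model
        med = statistics.median((v - cand) ** 2 for v in vectors)
        if med < best_med:
            best, best_med = cand, med
    return best
```

Because the median ignores up to half the residuals, a minority of grossly wrong block vectors (from occlusion or blooming) does not affect the chosen translation.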
international conference on robotics and automation | 1995
Kazuhiko Sumi; Manabu Hashimoto; Haruhisa Okuda
This paper presents our real-time vision system for robots. The system employs a new internal image representation in which the scene is encoded into three intensity-level images, generated by Laplacian-Gaussian filtering followed by dual thresholding. We refer to this image as the three-level broad-edge representation. It reduces the computational cost of normalized cross-correlation, so the system performs visual search tasks as well as object tracking tasks, which are not reliably achieved by mean-absolute-error-based correlators. Our prototype system performs 32×32 pixel normalized cross-correlation matching over a 128×128 pixel search area in 2 ms on an image processor with a 9 MHz pixel clock. This speed is fast enough to search for and track a single object at video frame rate.
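The three-level encoding can be sketched as follows. This is a minimal illustration under a stated simplification: a plain 3×3 Laplacian stands in for the paper's Laplacian-Gaussian filter, and the threshold value is hypothetical.

```python
# Sketch of the three-level broad-edge representation: filter, then
# dual-threshold into {-1, 0, +1}. A plain 3x3 Laplacian replaces the
# paper's Laplacian-Gaussian filter for brevity.

def three_level(img, t):
    """Encode an image into {-1, 0, +1}: +1 where the Laplacian response
    exceeds +t, -1 where it is below -t, and 0 otherwise."""
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            lap = (img[r - 1][c] + img[r + 1][c]
                   + img[r][c - 1] + img[r][c + 1]
                   - 4 * img[r][c])
            out[r][c] = 1 if lap > t else (-1 if lap < -t else 0)
    return out
```

Correlating two such ternary images needs only additions and sign tests, which is what makes the normalized cross-correlation cheap enough for frame-rate search.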
international conference on robotics and automation | 2010
Ming-Yu Liu; Oncel Tuzel; Ashok Veeraraghavan; Rama Chellappa; Amit K. Agrawal; Haruhisa Okuda
We propose a novel solution to object detection, localization and pose estimation with applications in robot vision. The proposed method is especially applicable when the objects of interest may not be richly textured and are immersed in heavy clutter. We show that a multi-flash camera (MFC) provides accurate separation of depth edges and texture edges in such scenes. We then reformulate the problem as one of finding matches between the depth edges obtained in one or more MFC images and the rendered depth edges that are computed offline using a 3D CAD model of the objects. In order to facilitate accurate matching of these binary depth edge maps, we introduce a novel cost function that respects both the position and the local orientation of each edge pixel. This cost function is significantly superior to the traditional Chamfer cost and leads to accurate matching even in heavily cluttered scenes where traditional methods are unreliable. We present a sub-linear time algorithm to compute the cost function using techniques from 3D distance transforms and integral images. Finally, we also propose a multi-view pose-refinement algorithm to improve the estimated pose. We implemented the algorithm on an industrial robot arm and obtained location and angular estimation accuracy of the order of 1 mm and 2° respectively for a variety of parts with minimal texture.
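The baseline that the paper improves on can be sketched as follows. This is a position-only Chamfer cost (hypothetical helper, brute force): the paper's cost additionally penalizes local orientation differences and is evaluated in sub-linear time via 3D distance transforms and integral images, both omitted here.

```python
import math

# Sketch of a plain Chamfer-style matching cost between two binary edge
# maps, given as lists of (x, y) edge pixels. Brute force for clarity.

def chamfer_cost(query_edges, model_edges):
    """Mean distance from each query edge pixel to its nearest model
    edge pixel; lower is a better match."""
    total = 0.0
    for qx, qy in query_edges:
        total += min(math.hypot(qx - mx, qy - my) for mx, my in model_edges)
    return total / len(query_edges)
```

In practice the inner minimization is precomputed once as a distance transform of the model edge map, so each candidate pose is scored by simple lookups.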
machine vision applications | 1997
Miwako Hirooka; Kazuhiko Sumi; Manabu Hashimoto; Haruhisa Okuda; Shinichi Kuroda
We propose Hierarchical Distributed Template Matching, which reduces the computational cost of template matching, while maintaining the same reliability as conventional template matching. To achieve cost reduction without loss of reliability, we first evaluate the correlation of shrunken images in order to select the maximum depth of the hierarchy. Then, for each level of hierarchy, we choose a small number of template points in the original template and build a sparse distributed template. The locations of the template points are optimized, so that they yield a distinct peak in the correlation score map. Experimental results demonstrate that our method can reduce the computational cost to less than 1/10 that of conventional hierarchical template matching. We also confirmed that the precision is 0.6 pixels.
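The hierarchical search above can be sketched as follows. This is a simplified illustration (hypothetical helpers): SAD stands in for the correlation score, and the distributed sparse-template point selection, which gives the method its further speedup, is omitted.

```python
# Coarse-to-fine sketch of hierarchical template matching. In the full
# scheme, the exhaustive search runs only at the coarsest pyramid level;
# finer levels search a small window around the projected coarse result.

def sad(img, tpl, r, c):
    """Sum of absolute differences between a template and an image patch."""
    return sum(abs(img[r + i][c + j] - tpl[i][j])
               for i in range(len(tpl)) for j in range(len(tpl[0])))

def shrink(img):
    """Halve resolution by 2x2 averaging, building the next pyramid level."""
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
              + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def best_match(img, tpl):
    """Exhaustive SAD search over all template positions."""
    positions = [(r, c)
                 for r in range(len(img) - len(tpl) + 1)
                 for c in range(len(img[0]) - len(tpl[0]) + 1)]
    return min(positions, key=lambda rc: sad(img, tpl, rc[0], rc[1]))
```

The paper's contribution is choosing how deep the pyramid can go (via correlation of the shrunken images) and which sparse template points to keep at each level so the correlation peak stays distinct.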
international conference on robotics and automation | 2011
Nitesh Shroff; Yuichi Taguchi; Oncel Tuzel; Ashok Veeraraghavan; Srikumar Ramalingam; Haruhisa Okuda
Progress in machine vision algorithms has led to widespread adoption of these techniques to automate several industrial assembly tasks. Nevertheless, shiny or specular objects, which are common in industrial environments, still present a great challenge for vision systems. In this paper, we take a step toward solving this problem in the context of vision-aided robotic assembly. We show that when the illumination source moves, the specular highlights remain in a region whose radius is inversely proportional to the surface curvature. This allows us to extract regions of the object that have high surface curvature. These points of high curvature can be used as features for specular objects. Further, an inexpensive multi-flash camera (MFC) design can be used to reliably extract these features. We show that multiple views of the object taken with the MFC can be used to triangulate and obtain the 3D location and pose of shiny objects. Finally, we show a system consisting of a robot arm with an MFC that can perform automated detection and pose estimation of shiny screws within a cluttered bin, achieving position and orientation errors of less than 0.5 mm and 0.8° respectively.
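The curvature feature test above can be sketched as follows. This is a loose illustration of the stated principle only (hypothetical helper and threshold): since highlight travel across the flashes is inversely proportional to curvature, points whose highlight barely moves mark high-curvature patches.

```python
# Sketch of the high-curvature feature test: across the multi-flash
# images, a specular highlight on a highly curved patch barely moves,
# so small highlight travel marks a high-curvature feature point.

def high_curvature_points(highlight_tracks, max_travel):
    """highlight_tracks: {point_id: [(x, y) highlight position per flash]}.
    Keep points whose highlight stays within max_travel of its mean."""
    feats = []
    for pid, track in highlight_tracks.items():
        mx = sum(x for x, _ in track) / len(track)
        my = sum(y for _, y in track) / len(track)
        travel = max(((x - mx) ** 2 + (y - my) ** 2) ** 0.5
                     for x, y in track)
        if travel <= max_travel:
            feats.append(pid)
    return feats
```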
society of instrument and control engineers of japan | 2008
Yasuo Kitaaki; Haruhisa Okuda; Hiroshi Kage; Kazuhiko Sumi
This paper describes high-speed 3D object recognition based on DAI (depth aspect image) matching and M-ICP (modified iterative closest point). We regard GPUs (graphics processing units) as coprocessors capable of general-purpose computation. Our 3D object recognition method consists of two pose estimation steps: DAI matching for the coarse step and HM-ICP (hierarchical M-ICP) for the fine one, implemented on a GPU, which offers remarkable parallel-computation performance. The experimental results show the effectiveness of our method: it runs two to three times faster than the original method, even though its computational load is at least 20 times greater, and its processing time is also more stable than that of the original method.
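The fine registration step can be sketched as follows. This is a translation-only ICP illustration (hypothetical helper): full ICP also estimates rotation, and the paper's M-ICP adds robustness modifications and a hierarchy on top of this basic loop.

```python
# Translation-only ICP sketch: alternately match each source point to its
# nearest destination point, then shift by the mean residual.

def icp_translation(src, dst, iters=10):
    """Estimate the 2-D translation aligning point set `src` to `dst`."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        dxs, dys = [], []
        for sx, sy in src:
            px, py = sx + tx, sy + ty            # current source position
            nx, ny = min(dst,                    # nearest destination point
                         key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
            dxs.append(nx - px)
            dys.append(ny - py)
        tx += sum(dxs) / len(dxs)                # shift by mean residual
        ty += sum(dys) / len(dys)
    return tx, ty
```

The nearest-neighbor search dominates the cost and is independent across points, which is why this loop maps well onto a GPU.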
intelligent robots and systems | 2010
Hiroki Dobashi; Akio Noda; Yasuyoshi Yokokohji; Hikaru Nagano; Tatsuya Nagatani; Haruhisa Okuda
In a robotic cell, an assembly robot has to grasp various parts robustly even under some uncertainty in their initial poses. For this purpose, it is necessary to design robust grasping strategies for robotic hands. In this paper, we propose a method to derive an optimal robust grasping strategy from a given initial pose error region of a target object. Based on an analysis of pushing operations, it is possible to simulate multi-fingered hand grasping and derive the permissible initial pose error region of a target object from which the planned grasping succeeds. Adopting an active search algorithm proposed by the authors, we can find the optimal grasping strategy efficiently. As an example, we derive the optimal strategy for grasping a circular object with a three-fingered hand.
Advanced Robotics | 2014
Hiroki Dobashi; Junichi Hiraoka; Takanori Fukao; Yasuyoshi Yokokohji; Akio Noda; Hikaru Nagano; Tatsuya Nagatani; Haruhisa Okuda; Kenichi Tanaka
In a robotic cell, assembly robots have to grasp parts of various shapes robustly and accurately even under some uncertainty in the initial poses of the parts. For this purpose, it is necessary to develop a universal robotic hand and robust grasping strategies, i.e. finger motions that achieve planned grasping robustly against the initial pose uncertainty of parts. In this paper, we propose a methodology to plan robust grasping strategies of a universal robotic hand for assembling parts of various shapes. In our approach, parts are aligned toward planned configurations during grasping actions, and the robustness of grasping strategies is analyzed and evaluated based on pushing operation analysis. As an application example, we plan robust grasping strategies for assembling a three-dimensional puzzle, and experimentally verify the robustness and effectiveness of the planned strategies for this assembly task.
international conference on mechatronics and automation | 2011
Yasuo Kitaaki; Rintaro Haraguchi; Koji Shiratsuchi; Yukiyasu Domae; Haruhisa Okuda; Akio Noda; Kazuhiko Sumi; Toshio Fukuda; Shun'ichi Kaneko; Takayuki Matsuno
In realizing a robotic assembly system for electronic products, recognizing a connector with a flexible cable as a single component is one of the most difficult problems, and it prevents full automation of the system. To overcome this problem, we used our proprietary 3-D range sensor and developed three component algorithms: one recognizes randomly stacked connectors; another automatically compensates for rotational and positional errors using a force sensor; and the third sets up a visually guided offline development environment. In this paper, we introduce these component algorithms and their integration into an evaluation system in detail.