Publication


Featured research published by Takashi Yoshimi.


International Journal of Computer Vision | 2002

3D Object recognition in cluttered environments by segment-based stereo vision

Yasushi Sumi; Yoshihiro Kawai; Takashi Yoshimi; Fumiaki Tomita

We propose a new method for 3D object recognition that uses segment-based stereo vision. An object is identified in a cluttered environment and its position and orientation (6 DOF) are determined accurately, enabling a robot to pick up the object and manipulate it. The object can be of any shape (planar figures, polyhedra, free-form objects) and may be partially occluded by other objects. Segment-based stereo vision is employed for 3D sensing, and both CAD-based and sensor-based object modeling subsystems are available. Matching is performed by calculating candidates for the object position and orientation using local features, verifying each candidate, and improving the accuracy of the position and orientation by an iteration method. Several experimental results are presented to demonstrate the usefulness of the proposed method.
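The match-then-verify loop described in this abstract (generate pose candidates from local features, then verify each against the scene) can be sketched as follows. This is a simplified, hypothetical illustration using point sets and a nearest-point inlier score, not the authors' implementation:

```python
import math

def apply_pose(R, t, pts):
    """Apply a rigid transform (3x3 rotation R, translation t) to 3D points."""
    return [tuple(sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3))
            for p in pts]

def score_candidate(R, t, model, scene, thresh=0.05):
    """Fraction of transformed model points that have a scene point nearby."""
    hits = 0
    for m in apply_pose(R, t, model):
        if min(math.dist(m, s) for s in scene) < thresh:
            hits += 1
    return hits / len(model)

def verify_candidates(candidates, model, scene, accept=0.8):
    """Keep the best-scoring (R, t) candidate; reject all if below `accept`."""
    best, best_score = None, 0.0
    for R, t in candidates:
        s = score_candidate(R, t, model, scene)
        if s > best_score:
            best, best_score = (R, t), s
    return (best, best_score) if best_score >= accept else (None, best_score)
```

In the paper the accepted candidate is further refined by iteration; here the sketch stops at verification.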


international conference on computer vision | 1998

Recognition of 3D free-form objects using segment-based stereo vision

Yasushi Sumi; Yoshihiro Kawai; Takashi Yoshimi; Fumiaki Tomita

We propose a new method to recognize 3D free-form objects from their apparent contours. It is the extension of our established method to recognize objects with fixed edges. Object models are compared with 3D boundaries which are extracted by segment-based stereo vision. Based on the local shapes of the boundaries, candidate transformations are generated. The candidates are verified and adjusted based on the whole shapes of the boundaries. The models are built from all-around range data of the objects. Experimental results show the effectiveness of the method.


international conference on pattern recognition | 1992

Reconstruction of 3D objects by integration of multiple range data

Yoshihiro Kawai; Toshio Ueshiba; Takashi Yoshimi; Masaki Oshima

A new approach to integrating range data from multiple viewpoints is described. Complicated objects, for example flowers, are observed by a range finder from multiple viewpoints whose relative positions are unknown. After simple segmentation, correspondences of regions between two consecutive data sets are examined. This matching process relies on regions that are not expected to be influenced by occlusion. Transformation parameters between the two data sets are calculated from the matching result, and an iteration process that minimizes the difference estimates of corresponding regions is devised so that a more accurate result is obtained. Data observed from multiple viewpoints are transformed to a standard coordinate system and integrated. Results on several objects, for example flowers, show that this method is promising.
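The step that computes transformation parameters between two views from matched regions can be illustrated with a closed-form 2D rigid alignment of matched region centroids. The planar simplification and all names here are assumptions for illustration, not the paper's actual 3D formulation:

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def estimate_transform(src, dst):
    """Closed-form 2D rigid transform (theta, tx, ty) mapping src onto dst,
    given matched region centroids from two consecutive views."""
    cs, cd = centroid(src), centroid(dst)
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]   # centered source point
        bx, by = dx - cd[0], dy - cd[1]   # centered destination point
        num += ax * by - ay * bx          # sum of cross terms
        den += ax * bx + ay * by          # sum of dot terms
    theta = math.atan2(num, den)          # least-squares rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])  # translation aligning the centroids
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, tx, ty
```

The 3D case in the paper plays the same role but estimates a full 6-DOF transform and then iterates to reduce residuals.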


advanced robotics and its social impacts | 2012

Picking up operation of thin objects by robot arm with two-fingered parallel soft gripper

Takashi Yoshimi; Naoyuki Iwata; Makoto Mizukawa; Yoshinobu Ando

One of the tasks we expect home service robots to perform is the handling of thin objects such as papers or plastic cards. Picking up a thin object placed on a table is difficult for a home service robot because it requires high dexterity. To achieve this motion, we used a two-fingered parallel soft gripper with a soft nail and constructed a picking-up sequence for a paper or a plastic card with a robot arm. The constructed sequence consists of a sliding motion and a raising motion: the robot slides the paper or card, hooks its nail on the edge, raises one side, and picks it up as a human would. In this paper, we propose this picking-up sequence, derived from an analysis of human motions, and confirm its effectiveness through experiments.
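As a rough illustration, the constructed sequence (slide, hook the nail, raise one side, grasp) can be modeled as an ordered list of phases executed in turn. The phase names and the `execute` callback are hypothetical, not taken from the paper:

```python
# Hypothetical phase ordering for picking up a thin object with a
# two-fingered parallel soft gripper; real execution would drive a robot arm.
SEQUENCE = ("approach", "press", "slide", "hook_nail", "raise_edge", "grasp", "lift")

def next_phase(current):
    """Return the phase that follows `current`, or None when the sequence ends."""
    i = SEQUENCE.index(current)
    return SEQUENCE[i + 1] if i + 1 < len(SEQUENCE) else None

def run_sequence(execute):
    """Run phases in order; `execute(phase)` returns True on success.
    Abort and report failure if any phase fails (e.g. the nail missed the edge)."""
    phase = SEQUENCE[0]
    log = []
    while phase is not None:
        if not execute(phase):
            return log, False
        log.append(phase)
        phase = next_phase(phase)
    return log, True
```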


intelligent robots and systems | 1992

Force Controlled Grinding Robot System For Unstructured Tasks

Makoto Jinno; Takashi Yoshimi; Akira Abe

A new force controlled grinding robot has been developed which can safely and efficiently carry out unstructured grinding tasks, such as repairing equipment in hostile environments, using remote control and automatic control. This robot system incorporates a new method for measuring the grinding force, in which the grinding force is measured from the moment about the grinder head's center of gravity. The influence of inertial forces caused by translational motion is removed, so that high stability is achieved. Furthermore, this robot can change the grinder's orientation to follow the surface of an object automatically, and can also grind the surface of an object into a desired shape. In this paper, the authors propose a new method for measuring the grinding force and also force control methods for unstructured grinding tasks.
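The measurement idea, that a translational inertial force acts through the center of gravity and therefore contributes no moment about it, can be checked with a small planar sketch. The signs, lever arm, and function names are assumptions for illustration:

```python
def moment_about(point, force, application):
    """Z component of the moment of a 2D `force` applied at `application`,
    taken about `point` (cross product r x F)."""
    rx, ry = application[0] - point[0], application[1] - point[1]
    return rx * force[1] - ry * force[0]

def grinding_force_from_moment(moment, lever_arm):
    """Recover the tangential grinding force from the measured moment about
    the grinder head's center of gravity: F = M / r. A translational
    inertial force acts through the CoG itself (r = 0), so it produces
    no moment and drops out of the measurement."""
    if lever_arm <= 0:
        raise ValueError("lever arm must be positive")
    return moment / lever_arm
```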


intelligent robots and systems | 2012

Object placement planner for robotic pick and place tasks

Kensuke Harada; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Hiromu Onda; Takashi Yoshimi; Yoshihiro Kawai

This paper proposes an object placement planner for a grasped object during pick-and-place tasks. The planner automatically determines a pose in which the object is stably placed near a user-assigned point on an environment surface. The method first constructs a polygon model of the surrounding environment, and then clusters the polygon models of both the environment and the object, where each cluster is approximated by a planar region. A placement of the object can be determined by selecting a pair of clusters, one from the object and one from the environment. We further impose several conditions to determine the pose of the object placed on the environment, and show that the position/orientation of the placed object can be determined for several cases, such as hanging a mug cup on a bar. The effectiveness of the proposed method is confirmed through several numerical examples.
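The cluster-pairing step can be sketched by selecting object/environment face pairs whose outward normals are nearly anti-parallel, one plausible necessary condition for a flat, stable placement. This is an assumption-laden simplification of the paper's full set of conditions:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def stable_pairs(object_clusters, env_clusters, tol=0.99):
    """Pair object faces with environment faces whose outward unit normals
    are nearly anti-parallel (dot product close to -1), a candidate
    condition for resting one planar region flat on the other.
    Each cluster is (name, unit_normal)."""
    pairs = []
    for oname, onormal in object_clusters:
        for ename, enormal in env_clusters:
            if dot(onormal, enormal) <= -tol:
                pairs.append((oname, ename))
    return pairs
```

A real planner would additionally check contact area, stability margins, and the user-assigned placement point before committing to a pose.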


ieee/sice international symposium on system integration | 2014

Project on Development of a Robot System for Random Picking: Grasp/manipulation planner for a dual-arm manipulator

Kensuke Harada; Takashi Yoshimi; Yasuyo Kita; Kazuyuki Nagata; Natsuki Yamanobe; Toshio Ueshiba; Yutaka Satoh; Takeshi Masuda; Ryuichi Takase; Takao Nishi; Takeshi Nagami; Yoshihiro Kawai; Osamu Nakamura

This research develops robotic vision and manipulation technologies for random bin-picking. In particular, we focus on the manipulation technology in which a dual-arm manipulator first picks up an object from the pile, then regrasps it from the right hand to the left hand, and finally places it in the fixture. We first give an overview of our research project, and then explain the grasp/manipulation planner for the dual-arm manipulator. Our grasp planner is well suited to objects that can be approximated by multiple cylinders, and our pick-and-place planner can effectively find a regrasping posture of an object. In addition to the manipulation technology, we briefly describe our vision technology for measuring the position/orientation of an object. Finally, to show the effectiveness of the proposed approach, we present an experimental result with a dual-arm manipulator.


ieee/sice international symposium on system integration | 2014

Research on person following system based on RGB-D features by autonomous robot with multi-Kinect sensor

Kouyou Shimura; Yoshinobu Ando; Takashi Yoshimi; Makoto Mizukawa

In this study, we develop a person following system based on RGB-D features for an autonomous robot with a multi-Kinect sensor. The sensor system, which we call "Multi-Kinect," is composed of multiple Kinect sensors: since the field of view of a single Kinect sensor is narrow, the system widens it by using three Kinect sensors. At the beginning, the system selects the person who raises a hand or says "Start" as the target to follow. The system then identifies the person from their location and the characteristics of their clothes, and re-recognizes the target when an occlusion occurs. As a result, a mobile robot with this sensor can detect humans and follow only the target person even when two or more people are located around the robot.
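Re-recognizing a target from clothing characteristics is commonly done with color histograms; the following is a generic sketch of that idea (coarse RGB binning plus histogram intersection), not the system's actual feature set:

```python
def color_histogram(pixels, bins=4):
    """Coarse, normalized RGB histogram of (r, g, b) pixels in [0, 255]."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

After an occlusion, the stored target histogram would be compared against each detected person, and the best match above a threshold would be re-adopted as the target.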


international conference on pattern recognition | 2008

3D object localization based on occluding contour using STL CAD model

Kenichi Maruyama; Yoshihiro Kawai; Takashi Yoshimi; Fumiaki Tomita

This paper describes a method to localize 3D objects, which is an extension of the segment-based object recognition method to use an STL CAD model. Models for localization are automatically generated using contour generators, which are estimated from occluding contours of projected images of the CAD model from multiple viewing directions and from depth images computed with a graphics accelerator. In addition, the model is dynamically updated in the recognition process according to the viewing direction. The localization process is based on multiple-hypothesis verification using the model and 3D boundaries reconstructed from stereo images. Experimental results show the effectiveness of the proposed method.


international conference on ubiquitous robots and ambient intelligence | 2012

A Trajectory generation of cloth object folding motion toward realization of housekeeping robot

Syohei Shibata; Takashi Yoshimi; Makoto Mizukawa; Yoshinobu Ando

In our research, we aim to construct a housekeeping robot system that folds laundry, and we propose a way to fold clothes without a vision sensing device. To realize this system, we prepared an appropriate fixed folding motion: we analyzed human cloth-folding motions and derived a fixed folding motion pattern for the robot. Furthermore, we generated a trajectory of the folding motion that is applicable to towels of different sizes. We confirmed the effectiveness of our method through experiments without the use of a vision sensing device.
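Adapting a fixed waypoint pattern to towels of different sizes could look like the following minimal sketch; the reference-size parametrization and function name are assumptions, not the paper's formulation:

```python
def scale_waypoints(waypoints, ref_size, towel_size):
    """Scale a fixed folding-motion waypoint pattern, recorded for a towel of
    ref_size = (width, length), to a towel of towel_size. The x/y coordinates
    are scaled with the towel; the gripper height z is kept unchanged."""
    sx = towel_size[0] / ref_size[0]
    sy = towel_size[1] / ref_size[1]
    return [(x * sx, y * sy, z) for x, y, z in waypoints]
```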

Collaboration


Dive into Takashi Yoshimi's collaboration.

Top Co-Authors

Makoto Mizukawa (Shibaura Institute of Technology)
Yoshinobu Ando (Shibaura Institute of Technology)
Fumiaki Tomita (National Institute of Advanced Industrial Science and Technology)
Yoshihiro Kawai (National Institute of Advanced Industrial Science and Technology)
Nobuto Matsuhira (Shibaura Institute of Technology)
Hideichi Nakamoto (Tokyo Institute of Technology)
Yuhki Ishiguro (Shibaura Institute of Technology)
Motomasa Tanaka (Shibaura Institute of Technology)
Yoshio Maeda (Shibaura Institute of Technology)