
Publication


Featured research published by Toshio Ueshiba.


European Conference on Computer Vision | 1998

A Factorization Method for Projective and Euclidean Reconstruction from Multiple Perspective Views via Iterative Depth Estimation

Toshio Ueshiba; Fumiaki Tomita

A factorization method is proposed for recovering camera motion and object shape from point correspondences observed in multiple images under perspective projection. In any factorization-based approach for perspective images, scaling parameters called projective depths must be estimated in order to obtain a measurement matrix that can be decomposed into motion and shape. One possible approach, proposed by Sturm and Triggs [11], is to compute projective depths from fundamental matrices and epipoles. The estimation of the fundamental matrices, however, may be unstable if the measurement noise is large or the cameras and object points are near critical configurations. In this paper, the authors propose an algorithm in which the projective depths are iteratively estimated so that the measurement matrix is brought as close as possible to rank 4. This estimation process requires no fundamental matrix computation and is therefore robust against measurement noise. Camera motion and shape in 3D projective space are then recovered by factoring the measurement matrix computed from the obtained projective depths. The authors also derive metric constraints for a perspective camera model in the case where the intrinsic camera parameters are available, and show that these constraints can be solved linearly for a projective transformation that relates the projective and Euclidean descriptions of the scene structure. Using this transformation, the projective motion and shape obtained in the factorization step are upgraded to metric descriptions, that is, represented with respect to a Euclidean coordinate frame. The validity of the proposed method is confirmed by experiments with real images.
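The core idea of iterating depths toward a rank-4 measurement matrix can be illustrated by alternating between a truncated-SVD rank-4 approximation and a least-squares update of each depth. The following Python sketch is only an illustration of this scheme under simplifying assumptions (uniform depth renormalization, fixed iteration count), not the authors' exact algorithm:

```python
import numpy as np

def rank4_residual(W):
    """Relative energy of W outside its best rank-4 approximation."""
    s = np.linalg.svd(W, compute_uv=False)
    return np.sqrt(np.sum(s[4:] ** 2) / np.sum(s ** 2))

def estimate_projective_depths(x, n_iter=30):
    """x: (m, n, 3) homogeneous image points over m views and n points.
    Returns depths lam (m, n) making W = [lam_ij * x_ij] close to rank 4."""
    m, n, _ = x.shape
    lam = np.ones((m, n))
    for _ in range(n_iter):
        # Stack the scaled measurement matrix W (3m x n).
        W = np.concatenate([lam[i][None, :] * x[i].T for i in range(m)], axis=0)
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        W4 = (U[:, :4] * s[:4]) @ Vt[:4]               # best rank-4 approximation
        for i in range(m):
            for j in range(n):                         # least-squares depth update
                w4 = W4[3 * i:3 * i + 3, j]
                lam[i, j] = x[i, j] @ w4 / (x[i, j] @ x[i, j])
        lam *= np.sqrt(m * n) / np.linalg.norm(lam)    # avoid the trivial lam -> 0
    return lam
```

Each depth update minimizes the distance between the scaled measurement matrix and its rank-4 approximation, so no fundamental matrices or epipoles are needed.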


International Conference on Robotics and Automation | 2009

Clothes state recognition using 3D observed data

Yasuyo Kita; Toshio Ueshiba; Ee Sian Neo; Nobuyuki Kita

In this paper, we propose a deformable-model-driven method to recognize the state of hanging clothes using three-dimensional (3D) observed data. For the task of picking up a specific part of the clothes, it is indispensable to obtain the 3D position and posture of that part. In order to robustly obtain such information from 3D observed data of the clothes, we take a deformable-model-driven approach [4], which recognizes the clothes state by comparing the observed data with candidate shapes predicted in advance. To carry out this approach despite the large shape variation of clothes, we propose a two-stage method. First, a small number of representative 3D shapes are calculated through physical simulations of hanging the clothes. Then, after observing the clothes, each representative shape is deformed so as to better fit the observed 3D data. The consistency between the adjusted shapes and the observed data is checked to select the correct state. Experimental results using actual observations show the promise of the proposed method.


International Conference on Pattern Recognition | 2006

An Efficient Implementation Technique of Bidirectional Matching for Real-time Trinocular Stereo Vision

Toshio Ueshiba

Bidirectional matching (BM) is an effective technique in area-based binocular stereo vision for maintaining one-to-one correspondence, detecting half-occlusions, and discarding false matches. This paper presents an extension of BM to trinocular stereo vision and proposes a memory-efficient implementation that maintains locality of memory access and thus enables the use of the CPU's SIMD instruction set for high runtime performance. Using this scheme together with several other implementation techniques, a throughput of 50 fps in generating disparity maps of approximately 320 × 240 pixels has been attained on ordinary PC workstations.
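The left-right consistency test at the heart of bidirectional matching can be shown in a few lines. This is a generic binocular version in plain Python for clarity, not the paper's SIMD trinocular implementation; the function name and tolerance parameter are illustrative:

```python
import numpy as np

def bidirectional_check(disp_lr, disp_rl, tol=1):
    """Keep a left-image disparity only if the right image's disparity
    at the matched position maps back to (approximately) the same pixel."""
    h, w = disp_lr.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_lr[y, x])
            xr = x - d                      # corresponding column in the right image
            if 0 <= xr < w and abs(int(disp_rl[y, xr]) - d) <= tol:
                valid[y, x] = True          # one-to-one match confirmed
    return valid
```

Pixels that fail this test are typically half-occluded (visible in only one view) or false matches, and are marked invalid in the disparity map.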


IEEE-RAS International Conference on Humanoid Robots | 2011

Clothes handling based on recognition by strategic observation

Yasuyo Kita; Fumio Kanehiro; Toshio Ueshiba; Nobuyuki Kita

In this paper, we propose a method to recognize clothing shape based on strategic observation during handling. When a robot handles largely deformed objects such as clothes, it is important for the robot to recognize the constantly varying shape. Large variation in shape and complex self-occlusion, however, make recognition very difficult. To address these difficulties, we have proposed a model-driven strategy using actions for informative observation and have developed some core methods based on this strategy [1][2][3]. In this paper, we show how these core methods can be used for an actual task that involves handling an item of clothing. In addition to proposing a sequence for this task, basic functions for realizing the sequence are also described. Experimental results using a robot demonstrated the practical utility of the proposed strategy.


Intelligent Robots and Systems | 2009

A method for handling a specific part of clothing by dual arms

Yasuyo Kita; Toshio Ueshiba; Ee Sian Neo; Nobuyuki Kita

In this paper, we propose a strategy for a dual-arm robot to pick up a specific part of an item of clothing with one hand while holding the item with its other hand. Due to the large deformability of clothing, the handling requirements differ from those for rigid objects. When grasping a specific part of clothing, large deformation leads to a large variety of positions and orientations of the target part, requiring flexibility in both visual recognition and motion control. On the other hand, since the clothing can flexibly curve over the hand, a relatively large range of suitable grasping actions is allowed. Considering these characteristics, the following three-stage strategy is proposed. First, the state of the clothing is recognized from visual observation using a deformable model [1]. Then, the theoretically optimal position and orientation of the hand for handling the specific part are calculated based on the recognition results. Finally, the position and orientation of the hand are modified by considering the executable motion range of the dual arms. Preliminary experiments using actual observations from a humanoid robot validated the effectiveness of the proposed strategy.


International Conference on Pattern Recognition | 1992

Reconstruction of 3D objects by integration of multiple range data

Yoshihiro Kawai; Toshio Ueshiba; Takashi Yoshimi; Masaki Oshima

A new approach to integrating range data from multiple viewpoints is described. Complicated objects, for example flowers, are observed by a range finder from multiple viewpoints whose relative positions are unknown. After simple segmentation, correspondences of regions between two consecutive data sets are examined. This matching process relies on regions that are unlikely to be influenced by occlusion. Transformation parameters between the two data sets are calculated by referring to the matching result. An iteration process that minimizes the difference estimates of corresponding regions is devised so that a more accurate result is obtained. Data observed from multiple viewpoints are transformed into a standard coordinate frame and integrated. Results on several objects, for example flowers, show that this method is promising.
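The step of computing transformation parameters from matched regions can be illustrated with the standard least-squares rigid alignment of corresponding points (the SVD/Kabsch solution). This is a generic sketch of that textbook estimator, not the paper's exact procedure:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid alignment of corresponding 3D points.
    A, B: (n, 3) arrays of matched points; returns R, t with B ~= A @ R.T + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In an iterative scheme, the recovered transform is applied, correspondences are re-evaluated, and the estimate is refined until the residual between corresponding regions stops decreasing.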


Intelligent Robots and Systems | 2010

Clothes handling using visual recognition in cooperation with actions

Yasuyo Kita; Ee Sian Neo; Toshio Ueshiba; Nobuyuki Kita

In this paper, we propose a method of visual recognition in cooperation with actions for the automatic handling of clothing by a robot. The difficulty of visually recognizing clothing largely depends on its observed shape. Therefore, a strategy of actively reshaping the clothing into a form that is easier to recognize should be effective. First, after the clothing is observed by a trinocular stereo vision system, it is checked whether the observation provides enough information to recognize the clothing's shape robustly. If not, proper “recognition-aid” actions, such as rotating and/or spreading the clothing, are automatically planned based on a visual analysis of the current shape. After executing the planned action, the clothing is observed again for recognition. The effect of the spreading action was demonstrated through experimental results using an actual humanoid.


International Conference on Robotics and Automation | 2005

Three Characterizations of 3D Reconstruction Uncertainty with Bounded Error

Benoît Telle; Olivier Stasse; Kazuhito Yokoi; Toshio Ueshiba; Fumiaki Tomita

Considering a stereoscopic visual system, this paper deals with the error involved in a 3D reconstruction process. If image pixels are seen as surfaces instead of points, interval analysis provides bounding boxes in which the reconstructed 3D point lies with certainty. This paper presents a method that refines this bounding box and gives a tighter approximation of the error. Using bisection and a reprojection test in the image planes, the space in which the reconstructed 3D point may be located is given as an octree. This is achieved through the resolution of a set inversion problem using the SIVIA algorithm. To make the result easier to manipulate, an enclosing ellipsoid is deduced from this approximation. Finally, the three models are tested in a recognition process for a humanoid robot.
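The bisection-based set inversion idea behind SIVIA can be illustrated on a toy 2D problem: paving the unit disk into boxes proven inside, proven outside, or left on the boundary, using an interval enclosure of x² + y². This sketch only shows the paving scheme; the paper's actual inclusion test is a reprojection test in the image planes:

```python
def sq_range(lo, hi):
    """Exact range of t**2 for t in the interval [lo, hi]."""
    m = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return m, max(lo * lo, hi * hi)

def sivia_disk(box, eps, inner, boundary):
    """Recursively bisect `box`, collecting boxes proven inside the unit
    disk in `inner` and undecided boxes smaller than `eps` in `boundary`."""
    (x0, x1), (y0, y1) = box
    lx, ux = sq_range(x0, x1)
    ly, uy = sq_range(y0, y1)
    lo, hi = lx + ly, ux + uy                 # interval enclosure of x^2 + y^2
    if hi <= 1.0:
        inner.append(box)                     # certainly inside the disk
    elif lo > 1.0:
        return                                # certainly outside: discard
    elif max(x1 - x0, y1 - y0) < eps:
        boundary.append(box)                  # undecided but small enough
    elif x1 - x0 >= y1 - y0:                  # bisect the longer dimension
        m = 0.5 * (x0 + x1)
        sivia_disk(((x0, m), (y0, y1)), eps, inner, boundary)
        sivia_disk(((m, x1), (y0, y1)), eps, inner, boundary)
    else:
        m = 0.5 * (y0 + y1)
        sivia_disk(((x0, x1), (y0, m)), eps, inner, boundary)
        sivia_disk(((x0, x1), (m, y1)), eps, inner, boundary)
```

The union of `inner` boxes is a guaranteed inner approximation of the set; shrinking `eps` tightens the boundary layer at the cost of more bisections, which is the same trade-off the octree construction makes in 3D.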


International Conference on Image Analysis and Recognition | 2014

Strategy for Folding Clothing on the Basis of Deformable Models

Yasuyo Kita; Fumio Kanehiro; Toshio Ueshiba; Nobuyuki Kita

In this study, a strategy is given for automatically reshaping an item of clothing from an arbitrary shape into a fixed shape by using its deformable model. The strategy consists of three stages that correspond to the clothing state: unknown (before recognition), unknown to known (recognition), and known (after recognition). In the first stage, a clothing item that is initially placed in an arbitrary shape is picked up and observed after some recognition-aid actions. In the second stage, the clothing state is recognized by matching the deformable clothing model to the observed 3D data [1]. In the third stage, a proper sequence of grasps toward the goal state is selected according to the clothing state. As an instance of this strategy, a folding task was implemented in a humanoid robot. Experimental results using pullovers show that the folding task can be achieved with a small number of grasping steps.


IEEE-RAS International Conference on Humanoid Robots | 2013

Recognizing clothing states using 3D data observed from multiple directions

Yasuyo Kita; Toshio Ueshiba; Fumio Kanehiro; Nobuyuki Kita

In this paper, we propose a method of recognizing the state of a clothing item by using three-dimensional (3D) data observed from multiple directions in an integrated manner. The situation dealt with in this paper is that a clothing item is observed from different directions by rotating it about a vertical axis. First, the sets of 3D points obtained from each observation are integrated into a depth buffer image that lies on the side of a cylinder containing the item (the CZ buffer image). Then, the CZ buffer is expanded into a new depth buffer image whose horizontal axis is akin to geodesic distance on the clothing surface (the EZ buffer image). As a result, the region where 3D points are stored in the EZ buffer image approximates “a view of the flattened surface” of the item, which is stable regardless of variation in the 3D shape of the item as long as the item is held at the same position. Experimental results using both synthetic images and actually observed images demonstrated that the similarity of regions in EZ buffer images is an effective measure for recognizing clothing states.
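The first integration step, projecting 3D points onto a depth buffer lying on the side of a cylinder around the item, can be sketched as follows. This is a simplified illustration that assumes a vertical axis through the origin and keeps the outermost point per cell; the function name and buffer resolution are illustrative, not the authors' implementation:

```python
import numpy as np

def cz_buffer(points, n_theta=180, n_z=100):
    """points: (n, 3) array of 3D points; returns an (n_z, n_theta)
    cylindrical depth buffer storing the radius of the outermost point."""
    x, y, z = points.T
    theta = np.arctan2(y, x)                      # angle around the vertical axis
    r = np.hypot(x, y)                            # distance from the axis
    ti = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    z0, z1 = z.min(), z.max()
    zi = np.clip(((z - z0) / (z1 - z0 + 1e-12) * n_z).astype(int), 0, n_z - 1)
    buf = np.full((n_z, n_theta), np.nan)
    for t, k, rad in zip(ti, zi, r):
        if np.isnan(buf[k, t]) or rad > buf[k, t]:
            buf[k, t] = rad                       # keep the point nearest the cylinder
    return buf
```

Buffers from observations at different rotation angles can then be merged cell by cell, which is what makes the multi-directional data usable "in an integrated manner" before the expansion into the EZ buffer.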

Collaboration


Dive into Toshio Ueshiba's collaborations.

Top Co-Authors

Fumiaki Tomita, National Institute of Advanced Industrial Science and Technology
Nobuyuki Kita, National Institute of Advanced Industrial Science and Technology
Yasuyo Kita, National Institute of Advanced Industrial Science and Technology
Takashi Yoshimi, Shibaura Institute of Technology
Yoshihiro Kawai, National Institute of Advanced Industrial Science and Technology
Ee Sian Neo, National Institute of Advanced Industrial Science and Technology
Fumio Kanehiro, National Institute of Advanced Industrial Science and Technology
Masaki Oshima, National Institute of Advanced Industrial Science and Technology
Kazuhito Yokoi, National Institute of Advanced Industrial Science and Technology
Yasushi Sumi, National Institute of Advanced Industrial Science and Technology