Masaki Oshima
University of Tokyo
Publications
Featured research published by Masaki Oshima.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1983
Masaki Oshima; Yoshiaki Shirai
This paper describes an approach to the recognition of stacked objects with planar and curved surfaces. The system works in two phases. In the learning phase, scenes each containing a single object are shown one at a time. The range data of a scene are obtained by a range finder. The description of each scene is built in terms of properties of regions and relations between them. This description is stored as an object model. In the recognition phase, an unknown scene is described in the same way as in the learning phase. Then the description is matched to the object models so that the stacked objects are recognized sequentially. Efficient matching is achieved by a combination of data-driven and model-driven search processes. Experimental results for blocks and machine parts are shown.
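The region-based matching described above can be pictured with a small sketch. This is not the authors' implementation: the region properties (area, perimeter, surface class) and the threshold are illustrative assumptions, and only the data-driven pass is shown.

```python
# Hypothetical sketch of data-driven matching of a scene description to an
# object model by comparing region properties; property names and the
# threshold are illustrative, not the descriptors used in the paper.
from dataclasses import dataclass

@dataclass
class Region:
    area: float          # surface area of the region
    perimeter: float     # length of the region boundary
    surface_class: str   # "plane" or "curved"

def region_distance(a: Region, b: Region) -> float:
    """Dissimilarity between two region descriptions."""
    if a.surface_class != b.surface_class:
        return float("inf")
    return abs(a.area - b.area) + abs(a.perimeter - b.perimeter)

def match_scene_to_model(scene, model, threshold=10.0):
    """Greedy data-driven pass: each scene region is assigned the most similar
    unused model region; a model-driven pass would then search for the model
    regions that remain unmatched."""
    used, matches = set(), {}
    for i, s in enumerate(scene):
        best_j, best_d = None, threshold
        for j, m in enumerate(model):
            if j in used:
                continue
            d = region_distance(s, m)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```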
Pattern Recognition | 1979
Masaki Oshima; Yoshiaki Shirai
This paper presents a method for describing scenes with polyhedra and curved objects from three-dimensional data obtained by a range finder. A scene is divided into many surface elements, each consisting of several data points. The surface elements are merged together into regions. The regions are classified into three classes: plane, curved, and undefined. The program extends the curved regions by merging adjacent curved and undefined regions. The scene is thus described by plane regions and smoothly curved regions, which may be useful for recognizing the objects. From the results obtained so far, the program seems to achieve the desired goals.
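As a rough illustration of the plane/curved/undefined classification, the sketch below fits a least-squares plane to one surface element and thresholds the residual. The threshold and minimum point count are assumptions, not values from the paper.

```python
# A minimal sketch, assuming 3-D range points grouped into one surface element,
# of classifying the element as "plane", "curved" or "undefined" by the
# residual of a least-squares plane fit. Thresholds are illustrative.
import numpy as np

def classify_surface_element(points: np.ndarray,
                             plane_tol: float = 0.5,
                             min_points: int = 4) -> str:
    """points: (N, 3) array of range data points for one surface element."""
    if len(points) < min_points:
        return "undefined"
    centered = points - points.mean(axis=0)
    # The smallest singular value measures deviation from the best-fit plane.
    residual = np.linalg.svd(centered, compute_uv=False)[-1] / np.sqrt(len(points))
    return "plane" if residual < plane_tol else "curved"

# Example: a noisy planar patch is classified as "plane".
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0, 10, 50),
                         rng.uniform(0, 10, 50),
                         rng.normal(0, 0.1, 50)])
print(classify_surface_element(patch))  # -> "plane"
```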
International Conference on Pattern Recognition | 1992
Yoshihiro Kawai; Toshio Ueshiba; Takashi Yoshimi; Masaki Oshima
A new approach to integrating range data from multiple viewpoints is described. Complicated objects, for example flowers, are observed by a range finder from multiple viewpoints whose relative positions are unknown. After simple segmentation, correspondences of regions between two consecutive data sets are examined. This matching process relies on regions that are not expected to be influenced by occlusion. Transformation parameters between the two data sets are calculated by referring to the matching result. An iteration process that minimizes difference estimates of corresponding regions is devised so that a more accurate result is obtained. Data observed from multiple viewpoints are transformed to a common coordinate system and integrated. Results on several objects, including flowers, show so far that this method is promising.
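The transformation step can be sketched as a least-squares rigid alignment of matched region centroids. The SVD-based solution below is a standard substitute for whatever estimator the paper uses, and the function names are illustrative; iterating it after re-matching regions would correspond to the refinement loop mentioned in the abstract.

```python
# A minimal sketch of the transformation step: given centroids of regions
# matched between two range images, estimate the rigid transform (R, t)
# with the SVD-based least-squares method.
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding region centroids. Returns R (3x3), t (3,)
    such that dst ~ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if one appears.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```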
Archive | 1987
Masaki Oshima; Yoshiaki Shirai
This paper describes an approach to the recognition of stacked objects with planar and curved surfaces. The system works in two phases. In the learning phase, scenes each containing a single object are shown one at a time. The range data of a scene are obtained by a range finder. The data points are grouped into many surface elements, each consisting of several points. The surface elements are merged together into regions. The regions are classified into three classes: plane, curved, and undefined. The program extends the curved regions by merging adjacent curved and undefined regions. The scene is thus represented by plane regions and smoothly curved regions. The description of each scene is built in terms of properties of regions and relations between them. This description is stored as an object model. In the recognition phase, an unknown scene is described in the same way as in the learning phase. Then the description is matched to the object models so that the stacked objects are recognized sequentially. Efficient matching is achieved by a combination of data-driven and model-driven search processes. Experimental results for blocks and machine parts are shown.
Computers & Graphics | 1983
Yoshiaki Shirai; Kazutada Koshikawa; Masaki Oshima; Katsushi Ikeuchi
A flexible computer vision system is described which is able to recognize 3-D objects if their models are given. The models can be built in a CAD process using the geometric modeler GEOMAP. Three cases are studied: surface normals are available, only part of the surface normals are available, and range data are available. In order to perform efficient matching, three methods are proposed for these cases: use of the EGI representation, use of relative angles between surface normals, and use of kernels.
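The EGI (Extended Gaussian Image) representation mentioned above is essentially an orientation histogram of surface normals. The sketch below bins unit normals, weighted by facet area, into latitude/longitude cells; this binning scheme is a simplification, not necessarily the one used in the paper.

```python
# A minimal sketch of an Extended Gaussian Image: surface normals of an object
# are accumulated, weighted by facet area, into bins on the unit sphere.
import numpy as np

def build_egi(normals: np.ndarray, areas: np.ndarray,
              n_lat: int = 18, n_lon: int = 36) -> np.ndarray:
    """normals: (N, 3) unit surface normals; areas: (N,) facet areas.
    Returns an (n_lat, n_lon) histogram of area over orientation."""
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))            # polar angle
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
    lat = np.minimum((theta / np.pi * n_lat).astype(int), n_lat - 1)
    lon = np.minimum((phi / (2 * np.pi) * n_lon).astype(int), n_lon - 1)
    egi = np.zeros((n_lat, n_lon))
    np.add.at(egi, (lat, lon), areas)   # unbuffered accumulation per bin
    return egi
```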
Intelligent Robots and Systems | 1995
Akira Nakamura; Hideo Tsukune; Tsukasa Ogasawara; Masaki Oshima
Geometric modeling of the environment is important in robot motion planning. Generally, shapes can be stored in a database, so the elements that need to be decided are positions and orientations. In this paper, surface-based geometric modeling using a teaching tree is proposed. In this modeling method, combinations of surfaces are considered in order to decide the positions and orientations of the object. The combinations are represented by a depth-first tree, which makes it easy for the operator to select one combination out of several. This method is effective not only when perfect data can be obtained but also when conditions for measuring three-dimensional data are unfavorable, which is often the case in the environment of a working robot.
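A minimal sketch of the teaching-tree idea, under the assumption that each node names one model surface and a root-to-leaf path is one surface combination that fixes the object's position and orientation; the class, function names, and example tree are illustrative, not taken from the paper.

```python
# Hypothetical teaching tree: depth-first enumeration of surface combinations
# from which the operator picks the one matching the measured data.
from dataclasses import dataclass, field

@dataclass
class TeachingNode:
    surface: str                                   # e.g. "bottom plane"
    children: list["TeachingNode"] = field(default_factory=list)

def enumerate_combinations(node: TeachingNode, path=None):
    """Depth-first traversal; yields each root-to-leaf surface combination."""
    path = (path or []) + [node.surface]
    if not node.children:
        yield path
    for child in node.children:
        yield from enumerate_combinations(child, path)

# Example: two ways to constrain a box-like part.
tree = TeachingNode("bottom plane", [
    TeachingNode("front plane", [TeachingNode("left plane")]),
    TeachingNode("top plane"),
])
for combo in enumerate_combinations(tree):
    print(combo)
# ['bottom plane', 'front plane', 'left plane']
# ['bottom plane', 'top plane']
```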
Intelligent Robots and Systems | 1996
Akira Nakamura; Tsukasa Ogasawara; Hideo Tsukune; Masaki Oshima
In robot motion planning, geometric modeling plays an important role. Generally, the shapes of objects such as factory products can be stored in a database, so the elements that need to be decided are positions and orientations. In this paper, surface-based geometric modeling using a task-oriented teaching tree is proposed. In this modeling, combinations of surfaces for deciding the positions and orientations of objects are represented in a depth-first tree based on the kind of task. The operator can therefore easily choose one combination out of several by comparing them with the obtained data. Moreover, a geometric model of the object suited to the manipulation task can be obtained.
International Conference on Advanced Robotics | 1997
Akira Nakamura; Tsukasa Ogasawara; Hideo Tsukune; Masaki Oshima
In robot motion planning, geometric modeling plays an important role. Generally, the shapes of objects such as factory products can be stored in a database, so the elements that need to be decided are positions and orientations. In this paper, surface-based geometric modeling using teaching trees that take backprojection into account is proposed. In this modeling, teaching trees representing combinations of surfaces for deciding positions and orientations are used. The operator can easily choose one combination out of several by comparing them with the obtained data. Moreover, the teaching trees are ordered depth-first based on backprojection, a motion-planning method, so a geometric model suited to the robot motion can be obtained.
Archive | 2001
Yessy Arvelyna; Masaki Oshima; Agus Kristijono; Iwan Gunawan
International Joint Conference on Artificial Intelligence | 1981
Masaki Oshima; Yoshiaki Shirai
Collaboration
National Institute of Advanced Industrial Science and Technology