
Publication


Featured research published by Saburo Tsuji.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Omni-directional stereo

Hiroshi Ishiguro; Masashi Yamamoto; Saburo Tsuji

Omnidirectional views of an indoor environment at different locations are integrated into a global map. A single camera swiveling about the vertical axis takes consecutive images and arranges them into a panoramic representation, which provides rich information around the observation point: a precise omnidirectional view of the environment and coarse ranges to objects in it. Using the coarse map, the system autonomously plans consecutive observations at the intersections of lines connecting object points, where the imaging directions can be estimated easily and precisely. From the two panoramic views at the planned locations, a modified binocular stereo method yields a more precise local map, albeit with direction-dependent uncertainties. New observation points are selected to decrease the uncertainty, and another local map is produced, which is then integrated with the adjacent local maps into a more reliable global representation of the world.
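
The geometric core of the planned-observation step, recovering an object position from the azimuths measured in two panoramic views at known observation points, reduces to intersecting two bearing rays. The Python sketch below illustrates that step only and is not the authors' implementation; the function name, the example coordinates, and the omission of the direction-dependent uncertainty modeling are choices made for this example.

```python
import numpy as np

def triangulate_from_azimuths(p1, theta1, p2, theta2):
    """Intersect two bearing rays measured in panoramic views taken at p1 and p2.

    p1, p2         : (x, y) positions of the two observation points (world frame).
    theta1, theta2 : azimuths (radians) to the same object feature.
    Returns the estimated (x, y) of the object, or None if the rays are parallel.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:          # nearly parallel bearings
        return None
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * d1

# Example: an object at (4, 3) seen from two planned observation points.
obj = np.array([4.0, 3.0])
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
th1 = np.arctan2(*(obj - p1)[::-1])
th2 = np.arctan2(*(obj - p2)[::-1])
print(triangulate_from_azimuths(p1, th1, p2, th2))   # ~[4. 3.]
```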


International Journal of Computer Vision | 1992

Panoramic representation for route recognition by a mobile robot

Jiang Yu Zheng; Saburo Tsuji

Here we explore a new theme in robot navigation: route recognition. It involves problems of visual sensing, spatial memory construction, and scene recognition in a global world. The strategy of this work is route description from experience: a robot acquires a route description from route views taken in a trial move, and then uses it to guide navigation along the same route. In the cognition phase, a new representation of the scenes along a route, termed the panoramic representation, is proposed. It is obtained by scanning side views along the route and provides rich information, such as 2D projections of scenes called the panoramic view (PV) and the generalized panoramic view (GPV), a path-oriented 2 1/2D sketch, and a path description, while containing only a small amount of data. The continuous PV and GPV are efficient to process, compared with fusing discrete views into a complete route model. In the recognition phase, the robot matches the panoramic representation memorized in the trial move against that built from incoming images so that it can locate and orient itself. We employ dynamic programming and circular dynamic programming for coarse matching of GPVs and PVs, and feature matching for fine verification. The wide fields of view of the GPV and PV make the scene recognition reliable.
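
The coarse matching of memorized and incoming views can be illustrated with a plain dynamic-programming alignment of two 1-D feature sequences. The sketch below is a toy stand-in for the GPV/PV matching stage: the function name and the use of simple column features are assumptions, and the paper's circular variant and fine feature verification are not reproduced.

```python
import numpy as np

def dp_match(memorized, incoming):
    """Align two 1-D feature sequences (e.g. column averages of panoramic views)
    with classic dynamic programming, returning the alignment cost and the
    warping path."""
    m, n = len(memorized), len(incoming)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(memorized[i - 1] - incoming[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, (i, j) = [], (m, n)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return D[m, n], path[::-1]

# Example: the incoming view is a slightly stretched copy of the memorized one.
memorized = np.array([0.1, 0.4, 0.9, 0.4, 0.1, 0.0])
incoming  = np.array([0.1, 0.4, 0.9, 0.9, 0.4, 0.1, 0.0])
cost, path = dp_match(memorized, incoming)
print(cost, path)
```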


International Conference on Robotics and Automation | 1994

Real-time omnidirectional image sensor (COPIS) for vision-guided navigation

Yasushi Yagi; Shinjiro Kawato; Saburo Tsuji

Describes a conic projection image sensor (COPIS) and its application: navigating a mobile robot while avoiding collisions with objects approaching from any direction. The COPIS system acquires an omnidirectional view around the robot in real time using a conic mirror. Under the assumption of constant linear motion of the robot and objects, objects moving along collision paths are detected by monitoring changes in their azimuth. Confronted with such objects, the robot changes its velocity to avoid collision and determines their locations and velocities.
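
The collision test can be illustrated with the constant-bearing rule that follows from the constant-linear-motion assumption: an object on a collision path keeps a nearly constant azimuth in the omnidirectional image. The sketch below is a minimal illustration, not the COPIS implementation; the function name and the drift threshold are invented for the example.

```python
import numpy as np

def on_collision_course(azimuths, drift_thresh_deg=1.0):
    """Flag a tracked object as a potential collision threat.

    azimuths : sequence of bearing angles (degrees) of one object, measured
               frame by frame in the omnidirectional image.
    Under constant linear motion, an object on a collision path keeps an
    (almost) constant azimuth, so a small per-frame drift signals danger.
    The threshold is an illustrative choice.
    """
    az = np.unwrap(np.radians(azimuths))          # handle the 359 deg -> 0 deg wrap
    drift = np.degrees(np.abs(np.diff(az)))
    return bool(np.all(drift < drift_thresh_deg))

# Example: one object drifts across the field of view, another holds its bearing.
print(on_collision_course([40.0, 43.1, 46.2, 49.5]))   # False: passing by
print(on_collision_course([40.0, 40.2, 39.9, 40.1]))   # True: constant bearing
```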


International Conference on Pattern Recognition | 1994

Understanding human motion patterns

Yan Guo; Gang Xu; Saburo Tsuji

This paper addresses the recognition of human motion patterns. We represent the human body structure in the silhouette by a stick figure model. The human motion can thus be recorded as a sequence of stick figure parameters, which can be used as input to a motion pattern analyzer. The recognition of human motion patterns is divided into two stages. In the first stage, a model-driven approach is used to track human motion; this amounts to finding the stick figure model that represents the human silhouette in each frame. In the second stage, a BP neural network classifies the motions of the stick figures into three categories: walking, running, and other motions. We transform the time sequence of stick figure parameters into the Fourier domain by the DFT, and use only the first four Fourier components as the input to the neural network.
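
The feature-extraction step, turning a time sequence of stick figure parameters into a few low-order Fourier components, can be sketched as follows. This is a minimal illustration under assumed details (mean removal and the use of coefficient magnitudes); the BP network and its training are not reproduced.

```python
import numpy as np

def motion_features(joint_angles, n_components=4):
    """Build a Fourier-domain feature vector for one stick-figure parameter.

    joint_angles : 1-D array of a stick-figure parameter (e.g. a knee angle)
                   sampled over one image sequence.
    Returns the magnitudes of the first `n_components` DFT coefficients, the
    kind of compact descriptor fed to a motion classifier.
    """
    spectrum = np.fft.rfft(joint_angles - np.mean(joint_angles))
    return np.abs(spectrum[:n_components])

# Example: a roughly periodic "walking" angle vs. a non-periodic motion.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
walking = 30 * np.sin(2 * t) + rng.normal(0, 1, t.size)
other   = np.cumsum(rng.normal(0, 3, t.size))
print(motion_features(walking))
print(motion_features(other))
```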


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1981

Automatic Analysis of Moving Images

Masahiko Yachida; Minoru Asada; Saburo Tsuji

Cine film and videotape are used to record a variety of natural processes in biology, medicine, meteorology, etc. This paper describes a system which detects and tracks moving objects in these records to obtain meaningful measures of their movements, such as linear and angular velocities. The features of the system are as follows. 1) To detect moving objects, which are usually blurred, temporal differences of gray values (differences between consecutive frames) are used in addition to spatial differences of gray values to separate moving objects from stationary ones. 2) The results of previous frames are used to guide the feature extraction process in the next frame, enabling efficient processing of moving pictures that consist of a large number of frames. 3) Uncertain parts in the current frame, such as occluded objects, are deduced using information from previous frames. 4) Misinterpreted or unknown parts in previous frames are reanalyzed using the results of later frames in which those parts could be found.
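
Point 1 can be illustrated with a rough frame-differencing sketch that combines a temporal difference with a spatial gradient test. This is not the paper's system; the thresholds, the simple gradient, and the way the two cues are combined (a logical AND) are illustrative choices.

```python
import numpy as np

def moving_object_mask(prev_frame, curr_frame, temporal_thresh=15, spatial_thresh=20):
    """Rough moving-object mask from two consecutive gray-level frames.

    Combines a temporal difference (change between frames) with a spatial
    difference (local gradient magnitude) to keep pixels that are both
    changing and textured."""
    prev_frame = prev_frame.astype(np.int32)
    curr_frame = curr_frame.astype(np.int32)
    temporal = np.abs(curr_frame - prev_frame) > temporal_thresh
    gy, gx = np.gradient(curr_frame.astype(float))
    spatial = np.hypot(gx, gy) > spatial_thresh
    return temporal & spatial

# Example with synthetic frames: a bright square shifts by two pixels.
prev = np.zeros((64, 64), np.uint8); prev[20:30, 20:30] = 200
curr = np.zeros((64, 64), np.uint8); curr[20:30, 22:32] = 200
print(moving_object_mask(prev, curr).sum(), "candidate moving pixels")
```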


International Conference on Pattern Recognition | 1990

Panoramic representation of scenes for route understanding

Jiang Yu Zheng; Saburo Tsuji

A dynamically generated panoramic representation for route recognition by a mobile robot is presented. The strategies employed are route description from experience and route recognition from visual information. In the description phase, the panoramic representation, a representation of the scenes along a route, is proposed. It is obtained by scanning scenes sideways along the route and provides rich information, such as a 2-D projection of the scenes called the panoramic view, a path-oriented 2-1/2-D sketch, and a path description. The continuous panoramic view is more efficient to process than integrating discrete views into a complete route model. In the recognition phase, the robot matches the panoramic representation built from incoming images with that memorized in the previous scan so that it can locate and orient itself during autonomous navigation. Since the panoramic view covers a wide field of view, reliable matching can be achieved with a coarse-to-fine method, starting from a very coarse level.
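
The coarse-to-fine matching can be illustrated on 1-D panoramic signatures: because a panoramic view wraps around 360 degrees, alignment amounts to finding a circular shift, searched exhaustively at a coarse level and refined locally at finer levels. The sketch below is a toy analogue under those assumptions, not the paper's matcher; the function name and the pyramid depth are invented for the example.

```python
import numpy as np

def coarse_to_fine_shift(memorized, incoming, levels=3):
    """Estimate the circular shift aligning two equally long 1-D panoramic
    signatures (e.g. column means of panoramic views), coarse to fine."""
    assert len(memorized) == len(incoming)
    best = 0
    for level in reversed(range(levels)):
        step = 2 ** level
        a, b = memorized[::step], incoming[::step]
        if level == levels - 1:                   # coarsest level: try every shift
            candidates = range(len(a))
        else:                                     # finer level: refine around the guess
            candidates = range(best // step - 2, best // step + 3)
        score = lambda s: float(np.dot(a, np.roll(b, s)))
        best = (max(candidates, key=score) % len(a)) * step
    return best

# Example: the incoming signature is the memorized one rotated by 11 columns.
n = np.arange(64)
memorized = np.sin(2 * np.pi * 3 * n / 64) + 0.7 * np.cos(2 * np.pi * 7 * n / 64)
incoming = np.roll(memorized, -11)
print(coarse_to_fine_shift(memorized, incoming))   # 11
```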


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1988

Determining surface orientation by projecting a stripe pattern

Minoru Asada; Hidetoshi Ichikawa; Saburo Tsuji

A method is presented for determining the surface orientations of an object by projecting a stripe pattern onto it. Assuming orthographic projection as the camera model and parallel light projection of the stripe pattern, the method obtains a 2 1/2-D representation of objects by estimating surface normals from the slopes and intervals of the stripes in the image. The 2 1/2-D image is further divided into planar or singly curved surfaces by examining the distribution of the surface normals in gradient space. A simple application to finding a planar surface and determining its orientation and shape is shown, and the error in surface orientation is discussed.
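
The segmentation step, grouping pixels by the distribution of their surface normals in gradient space, can be sketched as a simple voting scheme: a planar surface maps to a single (p, q), so it appears as a tight cluster. The sketch below assumes the normal maps are already available (the stripe-based recovery formula is not reproduced), and the function name, bin size, and vote threshold are illustrative.

```python
import numpy as np

def planar_regions_from_normals(p, q, bin_size=0.1, min_votes=50):
    """Label pixels with planar-surface candidates by clustering (p, q) normals.

    p, q : per-pixel surface gradient maps (z = f(x, y), p = df/dx, q = df/dy).
    Votes each pixel into a coarse 2-D histogram over gradient space and keeps
    only well-supported bins; unlabeled pixels get -1.
    """
    pi = np.floor(p / bin_size).astype(int)
    qi = np.floor(q / bin_size).astype(int)
    votes = {}
    for key in zip(pi.ravel(), qi.ravel()):
        votes[key] = votes.get(key, 0) + 1
    strong = [key for key, v in votes.items() if v >= min_votes]
    label_of = {key: i for i, key in enumerate(strong)}
    labels = np.full(p.shape, -1, int)
    for idx, key in enumerate(zip(pi.ravel(), qi.ravel())):
        labels.flat[idx] = label_of.get(key, -1)
    return labels

# Example: two synthetic planes plus a little noise in a 40x40 normal map.
rng = np.random.default_rng(1)
rows = np.arange(40)[:, None]
p = np.where(rows < 20, 0.55, -0.35) + rng.normal(0, 0.01, (40, 40))
q = np.where(rows < 20, 0.25,  0.45) + rng.normal(0, 0.01, (40, 40))
print(np.unique(planar_regions_from_normals(p, q)))   # one label per plane
```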


Pattern Recognition | 1994

Robust active contours with insensitive parameters

Gang Xu; Eigo Segawa; Saburo Tsuji

Active contours, known as snakes, have found wide applications since their introduction in 1987 by Kass et al. (Int. J. Comput. Vision 1, 321-331). However, one problem with current models is that their performance depends on proper internal parameters and an appropriate initial contour position, which, unfortunately, cannot be determined a priori; tuning them is usually a hard job. The problem stems from the fact that the internal normal force at each point of the contour is also a function of the contour shape. To solve this problem, we propose to compensate for this internal normal force so as to make it independent of shape. As a result, the new model works robustly without fine-tuning of the internal parameters, and can converge to high-curvature points such as corners.
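
For context, the discrete update of the baseline closed-contour (snake) model that the paper modifies looks roughly as follows. This is a sketch of the standard model only; the paper's compensation of the shape-dependent internal normal force is indicated by a comment but not reproduced, and the toy external force and parameter values are invented for the example.

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.05, dt=1.0):
    """One explicit update of a closed active contour (classic snake model).

    pts       : (N, 2) array of contour points, treated as a closed polygon.
    ext_force : callable mapping an (N, 2) array to (N, 2) image forces.
    """
    # Circular second and fourth differences give the elastic and rigid internal forces.
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
    internal = alpha * d2 - beta * d4
    # The paper's contribution would compensate the shape-dependent normal
    # component of `internal` here; that term is not reproduced in this sketch.
    return pts + dt * (internal + ext_force(pts))

# Toy external force pulling each point toward a circle of radius 10 around (32, 32).
center = np.array([32.0, 32.0])

def pull_to_radius_10(pts):
    radial = pts - center
    r = np.linalg.norm(radial, axis=1, keepdims=True)
    return 0.2 * (10.0 - r) * radial / r

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.stack([30 * np.cos(theta), 30 * np.sin(theta)], axis=1) + center
for _ in range(200):
    contour = snake_step(contour, pull_to_radius_10)
print(np.linalg.norm(contour - center, axis=1).mean())   # close to 10
```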


Intelligent Robots and Systems | 1996

Image-based memory of environment

Hiroshi Ishiguro; Saburo Tsuji

This paper describes how an intelligent agent can locate itself from an iconic memory of scenes. Omnidirectional images of the environment at a number of reference points are memorized as signs of the local areas. The similarity of the omnidirectional views at different locations in a small area decreases with their distance, so the agent can find the current local area by examining the similarities of the current view with those already memorized. Since the omnidirectional views are represented by their Fourier coefficients, the cost of memorizing and matching a large number of views is low. When the agent explores a new environment, it wanders around, memorizes the views at different locations, and organizes them into the iconic memory so as to reflect the environment geometry, which is effective for guiding navigation. Results of experiments in an office cluttered with furniture and objects indicate that the agent can reliably locate itself from the memorized views.
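
The Fourier-coefficient representation and the nearest-reference localization can be sketched as below. Using coefficient magnitudes makes the comparison independent of the robot's heading (a rotation of the view only shifts the phases); this, along with the function names, the number of coefficients, and the Euclidean distance, is an illustrative choice rather than a claim about the paper's exact matching.

```python
import numpy as np

def view_signature(profile, n_coeffs=8):
    """Compact signature of a 1-D omnidirectional intensity profile:
    magnitudes of its low-order Fourier coefficients."""
    return np.abs(np.fft.rfft(profile))[:n_coeffs]

def locate(current_profile, reference_signatures):
    """Return the id of the memorized reference point whose signature is
    nearest (Euclidean distance) to the current view's signature."""
    sig = view_signature(current_profile)
    dists = {rid: np.linalg.norm(sig - ref) for rid, ref in reference_signatures.items()}
    return min(dists, key=dists.get)

# Example: three memorized reference views; the current view is a rotated,
# slightly noisy copy of reference "B".
rng = np.random.default_rng(2)
views = {rid: rng.random(360) for rid in "ABC"}
refs = {rid: view_signature(v) for rid, v in views.items()}
current = np.roll(views["B"], 90) + rng.normal(0, 0.01, 360)
print(locate(current, refs))   # B
```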


International Conference on Computer Vision | 1990

Omni-directional stereo for making global map

Hiroshi Ishiguro; Masashi Yamamoto; Saburo Tsuji

A novel imaging method is presented for acquiring an omni-directional view with range information using a single camera. The difficult correspondence problem is solved by tracking each feature in the image sequence in the same manner as in epipolar plane image analysis. Although the equivalent camera distance is not long, the authors can obtain reliable estimates because of the high resolution of the panoramic views; the resolution in locating sharp edges is expected to be very fine, up to the resolution of the camera rotation, 0.005 degree. A global map making procedure is also proposed which uses omni-directional views at different locations. Omni-directional binocular stereo is used to acquire a path-centered local map with direction-dependent uncertainty. By combining multiple local maps, a more reliable global map can be built. The possibility of using panoramic stereo for map making by a robot is also explored.
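
The acquisition of the panoramic view itself, composing one image from a single camera swiveling about the vertical axis, can be sketched by stacking a vertical slit from each frame. The snippet below is a minimal illustration; the function name is invented, and the rotation-angle calibration and the EPI-style feature tracking are not reproduced.

```python
import numpy as np

def panorama_from_rotation(frames, slit_column=None):
    """Assemble a panoramic view from frames taken by a rotating camera by
    stacking one vertical slit per frame.

    frames      : iterable of (H, W) gray images, one per rotation step.
    slit_column : image column to sample (defaults to the center column).
    """
    frames = list(frames)
    h, w = frames[0].shape
    col = w // 2 if slit_column is None else slit_column
    return np.stack([f[:, col] for f in frames], axis=1)   # (H, n_frames)

# Example with synthetic frames: 360 one-degree steps produce a 64 x 360 panorama.
frames = [np.full((64, 32), angle % 256, np.uint8) for angle in range(360)]
print(panorama_from_rotation(frames).shape)   # (64, 360)
```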
