Yasukazu Okamoto
Toshiba
Publication
Featured research published by Yasukazu Okamoto.
Pattern Recognition | 1988
Shinichi Tamura; Yasukazu Okamoto; Kenji Yanashima
We are developing a health screening system for color eye-fundus photography. The system is designed to detect the first signs of adult diseases, for which purpose it is important to detect and trace the eye's blood vessels. This paper describes a method of finding the papilla by the Hough transform and, from there, tracing blood vessels with a second-order derivative Gaussian filter. The width of a blood vessel is obtained as the zero-crossing interval of the filter output, and the filter is adjusted to the current width of the vessel being traced. Since the obtained zero-crossing interval is larger than the true width of an ideal step-wise blood vessel, it is corrected at each step.
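The core width measurement can be illustrated with a short sketch: convolve a 1D intensity profile across the vessel with the second derivative of a Gaussian and take the interval between the two zero crossings bracketing the strongest response. This is a minimal stand-in, not the authors' implementation; the adaptive filter width and the step-wise correction from the paper are omitted.

```python
import numpy as np

def gaussian_second_derivative(sigma, radius):
    # Second derivative of a Gaussian, used as a 1D ridge filter.
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return (x**2 / sigma**4 - 1 / sigma**2) * g

def vessel_width(profile, sigma=2.0):
    """Estimate ridge width as the zero-crossing interval of the
    filter response across a 1D intensity profile."""
    kernel = gaussian_second_derivative(sigma, radius=4 * int(sigma) + 1)
    response = np.convolve(profile, kernel, mode="same")
    crossings = np.where(np.diff(np.sign(response)) != 0)[0]
    peak = np.argmax(np.abs(response))          # strongest ridge response
    left = crossings[crossings < peak]          # nearest crossing on each side
    right = crossings[crossings >= peak]
    if len(left) == 0 or len(right) == 0:
        return None
    return int(right[0] - left[-1])
```

As the abstract notes, this interval overestimates the true width of an ideal step-wise vessel, which is why the paper corrects it at each tracing step.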
ieee intelligent vehicles symposium | 2007
Susumu Kubota; Tsuyoshi Nakano; Yasukazu Okamoto
A fast and robust stereo algorithm for on-board obstacle detection systems is proposed. The proposed method finds the optimum road-obstacle boundary which provides the most consistent interpretation of the input stereo image pair. Global optimization combined with a robust matching measure enables stable detection of obstacles under various circumstances, such as heavy rain and severe lighting conditions. The processing time for a VGA-size image pair is about 15 ms on a 3.6 GHz Pentium 4 processor, which is fast enough for real-time applications.
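A globally optimal per-column boundary of this kind is commonly found with dynamic programming: each image column gets a boundary row, and the chosen rows must jointly minimize a matching cost plus a smoothness penalty between neighboring columns. The sketch below shows that generic scheme; the cost matrix, smoothness weight, and DP formulation are illustrative assumptions, not the paper's exact robust matching measure.

```python
import numpy as np

def boundary_dp(cost, smooth=1.0):
    """Pick one boundary row per column, minimizing the sum of
    per-cell costs plus smooth * |row difference| between
    adjacent columns, via dynamic programming."""
    rows, cols = cost.shape
    dp = cost[:, 0].astype(float).copy()        # best cost ending at each row
    back = np.zeros((rows, cols), dtype=int)    # backpointers for the path
    r = np.arange(rows)
    for c in range(1, cols):
        # trans[i, j]: cost of coming from row j to row i in this column
        trans = dp[None, :] + smooth * np.abs(r[:, None] - r[None, :])
        back[:, c] = np.argmin(trans, axis=1)
        dp = cost[:, c] + trans[np.arange(rows), back[:, c]]
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(dp))
    for c in range(cols - 1, 0, -1):            # backtrack the optimum
        path[c - 1] = back[path[c], c]
    return path
```

Because the optimization is global over the whole boundary, a few columns with corrupted matching costs (rain streaks, glare) cannot pull the boundary arbitrarily far from its neighbors.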
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991
Yoshinori Kuno; Yasukazu Okamoto; Satoshi Okada
A robot vision system that automatically generates an object recognition strategy from a 3D model and recognizes the object using this strategy is presented. The appearance of an object from various viewpoints is described in terms of visible 2D features such as parallel lines and ellipses. Features are then ranked according to the number of viewpoints from which they are visible. The rank and feature extraction cost of each feature are used to generate a treelike strategy graph. This graph gives an efficient feature search order when the viewpoint is unknown, starting with commonly occurring features and ending with features specific to a certain viewpoint. The system searches for features in the order indicated by the graph. After detection, the system compares a line representation generated from the 3D model with the image features to localize the object. Perspective projection is used in the localization process to obtain the precise position and attitude of the object, whereas orthographic projection is used in the strategy generation process to allow symbolic manipulation. Experimental results are given.
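The rank-then-cost ordering idea can be sketched in a few lines: sort features by how many viewpoints they are visible from (most first), breaking ties by extraction cost (cheapest first). The feature names, view sets, and costs below are made-up illustrations, and this linear ordering is a simplification of the paper's treelike strategy graph.

```python
# Hypothetical features: (name, viewpoints it is visible from, extraction cost)
features = [
    ("ellipse",        {"v1", "v2", "v3", "v4"}, 3.0),
    ("parallel-lines", {"v1", "v2", "v3", "v4"}, 1.0),
    ("corner-notch",   {"v4"},                   0.5),
    ("long-edge",      {"v2", "v3"},             0.5),
]

def search_order(features):
    """Order features from commonly visible (cheap first on ties)
    down to viewpoint-specific ones."""
    return [name for name, views, cost
            in sorted(features, key=lambda f: (-len(f[1]), f[2]))]
```

Searching in this order lets the system make progress before the viewpoint is known: early detections prune the set of viewpoints consistent with the image, and the late, viewpoint-specific features then discriminate among the survivors.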
international conference on robotics and automation | 1990
Kazunori Onoguchi; Mutsumi Watanabe; Yasukazu Okamoto; Yoshinori Kuno; Haruo Asada
A visual navigation system for a robot working in a nuclear power plant is presented. Such a system should have two functions. One is to provide a rough environment map or directions towards a goal, such as turning left at the second corner. The other is to avoid obstacles and to find a passage for continuous movement of the robot. The operation of the system is divided into two stages. In the first stage, a multi-information local map (MIL map) describing information necessary for self-location measurement is created interactively from stereo images collected during remote-controlled navigation. The second stage is autonomous navigation. The robot moves autonomously by using the map with the aid of the second function of the system. The system consists of five subsystems: an environment teaching subsystem, to create the MIL map; a self-location measurement subsystem, to find the robot's own location; an obstacle detection subsystem; a path planning subsystem; and a path following subsystem. Experimental results from a mobile robot show the usefulness of the system.
machine vision applications | 1993
Hiroaki Kubota; Yasukazu Okamoto; Hiroshi Mizoguchi; Yoshinori Kuno
This paper proposes a vision processor for moving-object analysis in time-varying images. The process of motion analysis can be divided into three stages: moving-object candidate detection, object tracking, and final motion analysis. The processor consists of three components corresponding to these three stages. The first is an overall image processing unit with a local parallel architecture. It locates candidate regions for moving objects. The second is a multimicroprocessor system consisting of 16 local modules. Each module tracks one candidate region. The third is the host workstation. In this paper, we describe both the architecture and the software of the vision processor.
international conference on pattern recognition | 2004
Hiroaki Nakai; Nobuyuki Takeda; Hiroshi Hattori; Yasukazu Okamoto; Kazunori Onoguchi
We propose a novel stereo scheme for obstacle detection which is aimed at practical automotive use. The basic methodology involves simple region matching between images, observed from a stereo camera rig, where it is assumed the images are related by a pseudo-projective transform. It provides an effective solution for determining boundaries of obstacles in noisy conditions, e.g. caused by weather or poor illumination, which conventional planar projection approaches cannot cope with. The linearity of the camera model also contributes significantly to compensation of road inclination. Essentially, precise lane detection and prior knowledge concerning obstacles or ambient conditions are unnecessary, and the proposed scheme is therefore applicable to a wide variety of outdoor scenes. We have also developed a multi-VLIW processor that fulfills the essential specifications for automotive use. Our scheme for obstacle detection is largely reflected in the processor design, so that real-time on-board processing can be realized at acceptable cost to both automobile users and manufacturers. The implementation of a prototype and experimental results illustrate our method.
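The "simple region matching" building block can be illustrated with a plain sum-of-absolute-differences search along a scanline; this sketch deliberately omits the paper's pseudo-projective warp between the two views, so it is only the generic stereo-matching core, with made-up image sizes and search range.

```python
import numpy as np

def region_disparity(left, right, row, col, size=3, max_d=8):
    """Find the disparity d that minimizes the sum of absolute
    differences between a (size x size) region in the left image
    and the region shifted d pixels left in the right image."""
    h = size // 2
    patch = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_d + 1):
        c = col - d
        if c - h < 0:                 # candidate window leaves the image
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

In the paper, matching costs of this kind feed the boundary estimation, and the assumed pseudo-projective relation between the views is what keeps the model linear enough to compensate for road inclination.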
international conference on computer vision | 1990
Yoshinori Kuno; Yasukazu Okamoto; Satoshi Okada
A vision system is presented which automatically generates an object recognition strategy from a 3D model, and recognizes the object using this strategy. In this system, the appearances of an object from various viewpoints are described with visible 2D features, such as parallel lines and ellipses. Then, the features in the appearances are ranked according to the number of viewpoints from which they are visible. The rank and the feature extraction cost for each feature are considered to generate a tree-like strategy graph. It shows an efficient feature search order when the viewpoint is unknown, starting with commonly occurring features and ending with features specific to a certain viewpoint. The system searches for features in the order indicated by the graph. After detection, the system compares the line representation generated from the 3D model and the image features to localize the object.
conference of the industrial electronics society | 1990
Yasukazu Okamoto; Yoshinori Kuno; Satoshi Okada
A vision system that automatically generates an object recognition strategy from a 3D model and recognizes the object by this strategy is presented. In this system, the appearances of an object from various view directions are described with 2D features, such as parallel lines and ellipses. These appearances are then ranked, and a tree-like strategy graph is generated. It shows an efficient feature search order when the viewer direction is unknown. The object is recognized by feature detection guided by the strategy. After the features are detected, the system compares the line representation generated from a 3D model and the image features to localize the object. Perspective projection is used in the localization process to obtain the precise position and attitude of the object, while orthographic projection is used in the strategy generation process to allow symbolic manipulation.
intelligent robots and systems | 1989
Kazunori Onoguchi; Mutsumi Watanabe; Yasukazu Okamoto; Haruo Asada
This paper presents a visual navigation system for a robot working in a nuclear power plant. Such a system should have two functions. One is to provide a rough environment map or directions towards a goal, such as turning left at the second corner. The other is to avoid obstacles and to find a passage for continuous movement of the robot. The operation of the system is divided into two stages. In the first stage, a MIL (multi-information local) map describing the information necessary for self-location measurement is created interactively from stereo images collected during remote-controlled navigation. The second stage is autonomous navigation. The robot moves autonomously by using the map with the aid of the second function of the system. The system consists of five subsystems: an environment teaching subsystem, a self-location measurement subsystem, an obstacle detection subsystem, a path planning subsystem, and a path following subsystem. The environment teaching subsystem creates an environment map; the self-location measurement subsystem gives the location of the robot by comparing the input image and the map information. The other three subsystems support autonomous navigation. The system has been implemented on a mobile robot. Experimental results have shown the usefulness of the proposed system.
Archive | 1993
Roberto Cipolla; Yasukazu Okamoto; Yoshinori Kuno