Publication


Featured research published by Naokazu Yokoya.


Computer Vision and Image Understanding | 1995

A Robust Method for Registration and Segmentation of Multiple Range Images

Takeshi Masuda; Naokazu Yokoya

Registration and segmentation of multiple range images are important problems in range image analysis. We propose a new algorithm for registration and segmentation of range data that is robust in the presence of outlying points (outliers) caused by noise and occlusion. The registration algorithm determines rigid motion parameters from a pair of range images. Our method integrates the iterative closest point (ICP) algorithm with random sampling and the least median of squares (LMS or LMedS) estimator. The segmentation method classifies the input data points into four categories comprising inliers and three types of outliers. Finally, we integrate the inliers obtained from multiple range images to construct a data set representing the entire object. We have experimented with our method on both synthetic range images and real range images taken by two kinds of rangefinders. Because of its robustness, the proposed method needs no preliminary processing such as smoothing or trimming of isolated points. It also offers the advantage of reduced computational cost.
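The core idea, ICP made robust by random minimal samples scored with a least-median-of-squares criterion, can be sketched as follows. This is a minimal 2-D illustration under assumed brute-force matching, not the authors' implementation; all function names are invented.

```python
import numpy as np

def closest_points(src, dst):
    # brute-force nearest neighbour in dst for every point of src
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return dst[np.argmin(d, axis=1)]

def rigid_from_pairs(p, q):
    # least-squares 2-D rigid motion (R, t) mapping point set p onto q (Kabsch)
    pc, qc = p.mean(0), q.mean(0)
    H = (p - pc).T @ (q - qc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, qc - R @ pc

def lmeds_icp_step(src, dst, n_samples=100, rng=None):
    # one ICP iteration made robust: fit many minimal samples of the matched
    # pairs and keep the motion with the least median of squared residuals
    rng = np.random.default_rng(rng)
    matched = closest_points(src, dst)
    best = None
    for _ in range(n_samples):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_from_pairs(src[idx], matched[idx])
        res = np.sum((src @ R.T + t - matched) ** 2, axis=1)
        med = np.median(res)        # LMedS criterion: robust to ~50% outliers
        if best is None or med < best[0]:
            best = (med, R, t)
    return best[1], best[2]
```

In a full pipeline the step would be iterated until the motion estimate converges; a single step already ignores gross outliers because the median, unlike a sum of squares, is unaffected by them.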


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Range image segmentation based on differential geometry: a hybrid approach

Naokazu Yokoya; Martin D. Levine

The authors describe a hybrid approach to the problem of image segmentation in range data analysis, where hybrid refers to a combination of both region- and edge-based considerations. The range image of 3-D objects is divided into surface primitives which are homogeneous in their intrinsic differential-geometric properties and contain no discontinuities in either depth or surface orientation. The method is based on the computation of partial derivatives, obtained by a selective local biquadratic surface fit. Then, by computing the Gaussian and mean curvatures, an initial region-based segmentation is obtained in the form of a curvature sign map. Two additional initial edge-based segmentations are also computed from the partial derivatives and depth values, namely, jump- and roof-edge maps. The three image maps are then combined to produce the final segmentation. Experimental results obtained for both synthetic and real range data of polyhedral and curved objects are given.
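The region-based part, computing Gaussian and mean curvature from the fitted depth surface and keeping only their signs, can be sketched as follows. This illustrative version uses plain finite differences in place of the paper's selective biquadratic fit.

```python
import numpy as np

def curvature_sign_map(z, eps=1e-8):
    # first and second partial derivatives of the depth map z(x, y)
    zy, zx = np.gradient(z)
    zyy, zyx = np.gradient(zy)
    zxy, zxx = np.gradient(zx)
    g = 1.0 + zx**2 + zy**2
    # Gaussian (K) and mean (H) curvature of the graph surface z = f(x, y)
    K = (zxx * zyy - zxy * zyx) / g**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * g**1.5)
    # keep only the signs; the (sign K, sign H) pair labels the surface type
    sK = np.sign(np.where(np.abs(K) < eps, 0, K))
    sH = np.sign(np.where(np.abs(H) < eps, 0, H))
    return sK, sH
```

On a spherical cap both principal curvatures have the same sign, so K is positive; on a plane both K and H vanish, which is how the sign map separates curved surface patches from flat ones.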


international conference on pattern recognition | 1996

Registration and integration of multiple range images for 3-D model construction

Takeshi Masuda; Katsuhiko Sakaue; Naokazu Yokoya

Registration and integration of measured data of real objects are becoming important in 3D modeling for computer graphics and computer-aided design. We propose a new algorithm for registration and integration of multiple range images that produces a geometric object surface model. The registration algorithm determines a set of rigid motion parameters that registers a range image to a given mesh-based geometric model. The algorithm integrates the iterative closest point algorithm with the least median of squares estimator. After registration, points in the input range image are classified into inliers and outliers according to the registration errors between each data point and the model. The outliers are appended to the surface model to be used in registration with the following range images. The parts classified as inliers by at least one registration are segmented out to be integrated. This process of registration and integration is iterated until all views are integrated. We successfully experimented with the proposed method on real range image sequences taken by a rangefinder. The method does not need any preliminary processing.
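The incremental model-growing loop described above can be sketched with point sets: each new view is compared against the current model, points already explained by the model count as inliers, and the remaining points (newly observed surface) are appended. A simplified illustration with invented names; the registration step is omitted and the views are assumed pre-aligned.

```python
import numpy as np

def classify(points, model, tau):
    # inlier test: residual of each point against its closest model point
    d = np.linalg.norm(points[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1) <= tau            # True -> inlier (already modelled)

def integrate_views(views, tau=0.5):
    # grow a point-based model: start from the first view; for each new view,
    # append only the outliers, i.e. surface parts not yet in the model
    model = views[0].copy()
    for v in views[1:]:
        inlier = classify(v, model, tau)
        model = np.vstack([model, v[~inlier]])
    return model
```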


Computer Vision and Image Understanding | 1998

Telepresence by Real-Time View-Dependent Image Generation from Omnidirectional Video Streams

Yoshio Onoe; Kazumasa Yamazawa; Haruo Takemura; Naokazu Yokoya

This paper describes a new approach to telepresence which realizes virtual tours into a visualized dynamic real world without significant time delay. We propose a novel concept of instantour, which enables us to instantly look around a visualized space of a dynamic real world. The instantour is realized by the following two steps: (1) video-rate omnidirectional image acquisition and (2) real-time view-dependent perspective image generation from an omnidirectional video stream. The proposed technique is applicable to real-time telepresence where the real world to be seen is far from the observation site, because the delay from a change of viewing direction to the change of displayed image does not depend on the actual distance between the two sites. Moreover, multiple users can simultaneously look around a visualized dynamic real world from a single viewpoint in different directions. The proposed technique is also useful for another type of telepresence which uses recorded omnidirectional video streams. We have developed a prototype of a virtual instantour system and have successfully experimented with real dynamic scenes.
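Step (2) reduces to resampling: for each output pixel, cast a ray through a pinhole camera model, rotate it by the chosen viewing direction, and look up the corresponding pixel in the omnidirectional frame. A minimal sketch assuming an equirectangular panorama and nearest-neighbour lookup (the paper's sensor uses a different omnidirectional projection, and the function name is invented):

```python
import numpy as np

def perspective_from_panorama(pano, yaw, pitch, fov, out_size):
    # pano: H x W equirectangular image (longitude along x, latitude along y)
    H, W = pano.shape[:2]
    n = out_size
    f = (n / 2) / np.tan(fov / 2)               # pinhole focal length, pixels
    a = np.arange(n) - n / 2 + 0.5
    u, v = np.meshgrid(a, a)
    # ray direction per output pixel in camera coordinates
    d = np.stack([u, -v, np.full_like(u, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # rotate rays by the viewing direction (pitch about x, then yaw about y)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = d @ (Ry @ Rx).T
    # convert rays to panorama coordinates and sample (nearest neighbour)
    lon = np.arctan2(d[..., 0], d[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # [-pi/2, pi/2]
    px = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    py = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return pano[py, px]
```

Because generating a new view is only a per-pixel table lookup, changing the viewing direction costs the same regardless of where the camera actually is, which is the source of the distance-independent latency claimed above.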


Proceedings of 1994 IEEE 2nd CAD-Based Vision Workshop | 1994

A robust method for registration and segmentation of multiple range images

Takeshi Masuda; Naokazu Yokoya

Registration and segmentation of multiple range images are among the most important problems in range image analysis. The problem has been investigated by a number of researchers, but most existing methods are easily affected by outlying points (outliers) caused by noise and occlusion. We first propose a robust method of estimating rigid motion parameters from a pair of range images. This method integrates the iterative closest point (ICP) algorithm with random sampling and the least median of squares (LMS) estimator. We then detect the outliers by thresholding the residuals in the LMS estimation, and finally we classify each pixel into one of five categories to obtain a segmentation. We experimented on real range images taken by two kinds of rangefinders and observed that our method worked successfully even for noisy data. The proposed method has the further advantage of reduced computational cost.


international conference on pattern recognition | 1998

Generation of high-resolution stereo panoramic images by omnidirectional imaging sensor using hexagonal pyramidal mirrors

Takahito Kawanishi; Kazumasa Yamazawa; Hidehiko Iwasa; Haruo Takemura; Naokazu Yokoya

We have developed a high-resolution omnidirectional stereo imaging sensor that can take images at video rate. The sensor system captures an omnidirectional view with a component composed of six cameras and a hexagonal pyramidal mirror, and acquires stereo views by symmetrically connecting two such components. The paper describes a method of generating stereo panoramic images using our sensor. First, the sensor system is calibrated: the twelve cameras are correctly aligned with the pyramidal mirrors, and Tsai's method removes the radial distortion of each camera image. Stereo panoramic images are then computed by registering the camera images captured at the same time.
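The undistortion step in Tsai's camera model, in its simplest single-coefficient form, rescales each distorted image coordinate (relative to the image centre) by a radius-dependent factor. A tiny sketch; the coefficient value in the usage below is made up:

```python
def undistort(xd, yd, k1):
    # Tsai's one-parameter radial model: undistorted coordinates are the
    # distorted ones scaled by 1 + k1 * r^2, with r the distorted radius
    r2 = xd * xd + yd * yd
    s = 1.0 + k1 * r2
    return xd * s, yd * s
```

The centre of the image is a fixed point of the mapping, and points further from the centre are displaced more, which matches the barrel/pincushion appearance of radial lens distortion.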


international conference on pattern recognition | 1998

Visual surveillance and monitoring system using an omnidirectional video camera

Yoshio Onoe; Naokazu Yokoya; Kazumasa Yamazawa; Haruo Takemura

This paper describes a visual surveillance and monitoring system based on omnidirectional imaging and view-dependent image generation from omnidirectional video streams. While conventional visual surveillance and monitoring systems usually consist of either a number of fixed regular cameras or a mechanically controlled camera, the proposed system uses a single omnidirectional video camera with a hyperboloidal mirror. This approach has the advantage of lower latency when looking around a large field of view. In a developed prototype system, the viewing direction is determined by tracking the viewer's head, by using a mouse, or by tracking a moving object in the omnidirectional image.


international conference on pattern recognition | 1998

Real-time tracking of multiple moving objects in moving camera image sequences using robust statistics

Shoichi Araki; Takashi Matsuoka; Haruo Takemura; Naokazu Yokoya

In this paper, we propose a new method for detecting and tracking moving objects in an image sequence from a moving camera, using robust statistics and active contour models. We assume that the apparent background motion between two consecutive image frames can be approximated by an affine transformation. In order to register the static background, we estimate the affine transformation parameters using the LMedS (least median of squares) method, a robust statistical estimator. Split-and-merge contour models, recently proposed by the authors, are employed for tracking multiple moving objects. The image energy of the contour models is defined on the image obtained by subtracting the previous frame, transformed with the estimated affine parameters, from the current frame. We have implemented the method on an image processing system consisting of DSP boards for real-time tracking of moving objects in a moving camera image sequence.
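The background-registration step, estimating an affine transform with LMedS so that independently moving objects act as outliers, can be sketched over point correspondences. This is a simplification (the paper works on image data) and the helper names are invented:

```python
import numpy as np

def affine_from_triple(p, q):
    # exact 2-D affine transform (A, b) mapping three points p onto q
    M = np.hstack([p, np.ones((3, 1))])
    sol = np.linalg.solve(M, q)          # rows 0-1: A transposed, row 2: b
    return sol[:2].T, sol[2]

def lmeds_affine(p, q, n_samples=200, rng=None):
    # robust affine estimate: fit minimal 3-point samples and keep the one
    # with the least median of squared residuals over all correspondences
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(n_samples):
        idx = rng.choice(len(p), size=3, replace=False)
        try:
            A, b = affine_from_triple(p[idx], q[idx])
        except np.linalg.LinAlgError:    # degenerate (collinear) sample
            continue
        res = np.sum((p @ A.T + b - q) ** 2, axis=1)
        med = np.median(res)
        if best is None or med < best[0]:
            best = (med, A, b)
    return best[1], best[2]
```

Correspondences on moving objects disagree with the dominant background motion, so they inflate only the upper tail of the residuals; the median, and therefore the estimate, follows the background.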


Systems and Computers in Japan | 2003

Memory-based self-localization using omnidirectional images

Hidehiko Iwasa; Nobuhiro Aihara; Naokazu Yokoya; Haruo Takemura

For a robot, recognizing its own location in a real environment is important for tasks such as visual navigation. This paper proposes a memory-based self-localization method based on omnidirectional images. In the proposed method, information that is invariant to rotation around the axis of the omnidirectional image sensor is first extracted by generating autocorrelation images from omnidirectional images, which contain global information useful for self-localization. Next, eigenspaces are formed from the generated autocorrelation images, and self-localization is performed by searching for the learning images closest to the input image in the eigenspace. Performing self-localization in two stages (general and local) makes the method robust. Lastly, experiments were conducted using time-series omnidirectional images actually captured both indoors and outdoors to verify the effectiveness of the proposed method.
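The eigenspace search can be sketched as plain PCA over flattened (autocorrelation) images followed by a nearest-neighbour lookup; a minimal single-stage illustration with invented names, not the authors' two-stage implementation:

```python
import numpy as np

def build_eigenspace(images, k):
    # images: N x D matrix, one flattened learning image per row
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    basis = Vt[:k]                       # top-k principal axes
    coords = (images - mean) @ basis.T   # learning images in the eigenspace
    return mean, basis, coords

def localize(query, mean, basis, coords):
    # project the input image and return the index of the nearest
    # learning image in the eigenspace (its capture location is the answer)
    z = (query - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - z, axis=1)))
```

Because each learning image is stored only as a k-dimensional coordinate vector, the memory cost and the per-query search cost are small even for large image databases.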


international conference on pattern recognition | 1996

Data embedding into pictorial images with less distortion using discrete cosine transform

Takeshi Ogihara; Daisuke Nakamura; Naokazu Yokoya

The style of cryptography in which important data are embedded into image data so as not to be noticed by outsiders is called video steganography. This paper proposes a new method for embedding data into pictorial images. In this method, the amount of locally embedded data is adaptively varied according to the local characteristics of the original image, so that more data can be embedded with less distortion. Using the discrete cosine transform (DCT), data are mainly embedded into high-frequency components of the image, which have little influence on image quality. The amount of embedded data is varied depending on the DCT components of adjacent areas so as not to weaken edge lines in the image. We also show the effectiveness of our method by subjective evaluation.
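The principle of hiding bits in high-frequency DCT coefficients can be illustrated per 8x8 block by forcing the parity of one quantized coefficient. This is a generic sketch, not the paper's adaptive scheme; the coefficient position and quantization step are made up.

```python
import numpy as np

def dct_matrix(n=8):
    # orthonormal DCT-II basis matrix: row = frequency, column = sample
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * i / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, pos=(6, 7), q=8.0):
    # hide one bit in the parity of a quantized high-frequency coefficient
    C = dct_matrix(block.shape[0])
    Y = C @ block @ C.T                  # forward 2-D DCT
    m = np.round(Y[pos] / q)
    if int(m) % 2 != bit:                # force parity to encode the bit
        m += 1
    Y[pos] = m * q
    return C.T @ Y @ C                   # inverse 2-D DCT

def extract_bit(block, pos=(6, 7), q=8.0):
    C = dct_matrix(block.shape[0])
    Y = C @ block @ C.T
    return int(np.round(Y[pos] / q)) % 2
```

Since the transform is orthonormal, perturbing a single high-frequency coefficient by at most one and a half quantization steps spreads into a small, visually insignificant change across the block, which is why high-frequency embedding distorts the image so little.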

Collaboration

Top co-authors of Naokazu Yokoya:

Haruo Takemura (Nara Institute of Science and Technology)
Hidehiko Iwasa (Nara Institute of Science and Technology)
Kazumasa Yamazawa (Nara Institute of Science and Technology)
Masayuki Kanbara (Nara Institute of Science and Technology)
Takashi Okuma (Nara Institute of Science and Technology)
Takeshi Masuda (Nara Institute of Science and Technology)
Yoshio Onoe (Nara Institute of Science and Technology)