
Publication


Featured research published by Hidehiko Iwasa.


International Conference on Pattern Recognition | 1998

Generation of high-resolution stereo panoramic images by omnidirectional imaging sensor using hexagonal pyramidal mirrors

Takahito Kawanishi; Kazumasa Yamazawa; Hidehiko Iwasa; Haruo Takemura; Naokazu Yokoya

We have developed a high-resolution omnidirectional stereo imaging sensor that can capture images at video rate. The sensor system acquires an omnidirectional view with a component composed of six cameras and a hexagonal pyramidal mirror, and obtains stereo views by symmetrically connecting two such components. The paper describes a method of generating stereo panoramic images with this sensor. First, the sensor system is calibrated: the twelve cameras are aligned with the pyramidal mirrors, and Tsai's method is used to correct the radial distortion of each camera image. Stereo panoramic images are then computed by registering the camera images captured at the same time.
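
As a rough illustration of the pipeline described above, the sketch below (not the authors' implementation; the first-order distortion coefficient and the 60-degree per-camera coverage are assumptions) corrects radial distortion per camera and concatenates the six rectified views into a 360-degree panoramic strip.

```python
# Minimal sketch, assuming a first-order radial distortion model and six
# cameras each covering 60 degrees of azimuth after rectification.
import numpy as np

def undistort_radial(img, k1):
    """Resample an image to remove first-order radial distortion (Tsai-style)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xx - cx) / cx, (yy - cy) / cy              # normalized coordinates
    r2 = xn ** 2 + yn ** 2
    xs = np.clip((xn * (1 + k1 * r2)) * cx + cx, 0, w - 1)  # sample positions in
    ys = np.clip((yn * (1 + k1 * r2)) * cy + cy, 0, h - 1)  # the distorted image
    return img[ys.round().astype(int), xs.round().astype(int)]

def stitch_panorama(views, k1=-0.05):
    """Rectify each of the six views and place them side by side."""
    rectified = [undistort_radial(v, k1) for v in views]
    return np.hstack(rectified)

# Example: six synthetic 480x640 grayscale views -> one 360-degree strip.
views = [np.random.rand(480, 640) for _ in range(6)]
panorama = stitch_panorama(views)        # shape (480, 3840)
```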


Systems and Computers in Japan | 2003

Memory-based self-localization using omnidirectional images

Hidehiko Iwasa; Nobuhiro Aihara; Naokazu Yokoya; Haruo Takemura

For a robot, recognizing its location in a real environment is important for tasks such as visual navigation. This paper proposes a memory-based self-localization method that uses omnidirectional images. In the proposed method, information that is invariant to rotation around the axis of the omnidirectional image sensor is first extracted by generating autocorrelation images from omnidirectional images, which contain global information useful for self-localization. Next, eigenspaces are formed from the generated autocorrelation images, and self-localization is performed by searching for the learning images closest to the input image in the eigenspace. In addition, performing self-localization in two stages (general and local) makes the localization robust. Finally, experiments with time-series omnidirectional images captured both indoors and outdoors verify the effectiveness of the proposed method.
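
The core steps lend themselves to a short sketch. The one below assumes panoramic images given as 2-D arrays whose columns index azimuth angle, so rotation about the sensor axis becomes a circular column shift; the circular autocorrelation along that axis is then shift-invariant, and localization reduces to a nearest-neighbour search in a PCA eigenspace of such features.

```python
# Minimal sketch, not the paper's exact formulation.
import numpy as np

def rotation_invariant_feature(pano):
    """Circular autocorrelation along the azimuth (column) axis, flattened."""
    spec = np.fft.fft(pano, axis=1)
    acorr = np.fft.ifft(np.abs(spec) ** 2, axis=1).real
    return acorr.ravel() / (np.linalg.norm(acorr) + 1e-12)

def build_eigenspace(train_feats, dim=8):
    """PCA via SVD; returns the mean and the top `dim` principal axes."""
    X = np.asarray(train_feats)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]

def localize(query_pano, memory_panos, memory_positions, dim=8):
    """Return the memorized position whose view is nearest in the eigenspace."""
    feats = [rotation_invariant_feature(p) for p in memory_panos]
    mean, axes = build_eigenspace(feats, dim)
    proj = lambda f: axes @ (f - mean)
    q = proj(rotation_invariant_feature(query_pano))
    dists = [np.linalg.norm(proj(f) - q) for f in feats]
    return memory_positions[int(np.argmin(dists))]
```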


International Conference on Image Analysis and Processing | 1999

Content-based similarity retrieval of images based on spatial color distributions

Hidenori Yamamoto; Hidehiko Iwasa; Naokazu Yokoya; Haruo Takemura

The importance of image retrieval systems that search image databases for the images users want to see has been increasing, as advances in computer technology have made it easy to build large image databases. This paper proposes a content-based image retrieval (CBIR) system in which an example image is given as a query. One of the most important factors in CBIR with a pictorial query is the design of features that represent the appearance of the query image. To capture the color distribution of images, the proposed method uses multiple local color histograms corresponding to segmented sub-regions, whereas most previous histogram-based image retrieval systems have used a single global color histogram for the entire image. An image is segmented by recursively dividing it into two rectangular regions based on discriminant analysis. The resulting binary tree of regions facilitates the evaluation of similarity among images. The paper also describes two kinds of experiments conducted to demonstrate the discrimination ability of the system and to evaluate the consistency of the proposed similarity function with the human sense of image similarity. The results show that, compared with the single-histogram method, the system effectively captures spatial color information and retrieves images of similar appearance. The experiments also confirmed that the proposed method improves the consistency between the similarity function and human judgment for many query images.
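
A minimal sketch of the idea follows (the exact splitting criterion and histogram parameters are assumptions, not the paper's): each region is cut at the position that maximizes the between-class variance of pixel intensities, the recursion stops at a fixed depth, and similarity is the histogram intersection summed over corresponding leaves.

```python
# Minimal sketch, assuming 8-bit RGB images of shape (H, W, 3).
import numpy as np

def best_split(gray):
    """Return ('h' or 'v', index) of the cut maximizing between-class variance
    of intensities (brute force; intended for small images)."""
    best = ('h', gray.shape[0] // 2, -1.0)
    for axis, size in (('h', gray.shape[0]), ('v', gray.shape[1])):
        for cut in range(1, size):
            a, b = (gray[:cut], gray[cut:]) if axis == 'h' else (gray[:, :cut], gray[:, cut:])
            w1, w2 = a.size, b.size
            var_b = w1 * w2 * (a.mean() - b.mean()) ** 2 / (w1 + w2) ** 2
            if var_b > best[2]:
                best = (axis, cut, var_b)
    return best[0], best[1]

def leaf_histograms(img, depth=3, bins=8):
    """Recursively split `img` and return the per-leaf color histograms in order."""
    if depth == 0 or min(img.shape[:2]) < 2:
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
        return [h.ravel() / max(h.sum(), 1)]
    axis, cut = best_split(img.mean(axis=2))
    a, b = (img[:cut], img[cut:]) if axis == 'h' else (img[:, :cut], img[:, cut:])
    return leaf_histograms(a, depth - 1, bins) + leaf_histograms(b, depth - 1, bins)

def similarity(img1, img2, depth=3):
    """Histogram intersection summed over corresponding leaves of the two trees."""
    hs1, hs2 = leaf_histograms(img1, depth), leaf_histograms(img2, depth)
    return sum(np.minimum(h1, h2).sum() for h1, h2 in zip(hs1, hs2)) / len(hs1)
```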


Systems and Computers in Japan | 1997

Splitting of active contour models based on crossing detection for extraction of multiple objects

Shoichi Araki; Naokazu Yokoya; Hidehiko Iwasa; Haruo Takemura

With previous active contour models (Snakes), separate extraction of multiple objects inside one contour was impossible, so the number of initial contours had to equal the number of objects. This paper aims at automatic extraction of multiple objects using Snakes, so that the objects can be extracted separately from a single initial contour that encloses them all. For this purpose, a splitting contour model is proposed: a contraction-type Snake driven by an area term is split into multiple contours wherever self-crossing is detected during deformation. With the proposed method, smaller contour models generated by splitting are extinguished, so that noise and irrelevant tiny objects in the image are prevented from being captured; hence, objects are extracted stably even when the initial contour is placed well apart from them. In particular, multiple objects can be extracted automatically with the initial contour set at the image border. Evaluation tests were carried out to prove the effectiveness of the proposed method, which was then applied successfully to real images, namely extracting multiple cells from a microphotograph and extracting and tracking multiple objects in image sequences.
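
The splitting step itself can be illustrated compactly. The sketch below covers only the crossing detection and contour splitting (the energy minimization that drives the contraction is omitted); the snake is assumed to be a closed polygon stored as an (N, 2) array of vertices.

```python
# Minimal sketch of the splitting step only.  In the full method the smaller
# fragments produced here would be extinguished if they fall below a size
# threshold.
import numpy as np

def _segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect."""
    d = lambda a, b, c: (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d(p1, p2, p3) * d(p1, p2, p4) < 0) and (d(p3, p4, p1) * d(p3, p4, p2) < 0)

def split_on_crossing(contour):
    """Return [contour] if no self-crossing is found, else the two sub-contours."""
    n = len(contour)
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:        # these edges share a vertex
                continue
            if _segments_cross(contour[i], contour[(i + 1) % n],
                               contour[j], contour[(j + 1) % n]):
                a = contour[i + 1:j + 1]                            # between the crossing edges
                b = np.vstack([contour[j + 1:], contour[:i + 1]])   # the remaining vertices
                return [a, b]
    return [contour]
```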


International Conference on Pattern Recognition | 1996

Facial component extraction by cooperative active nets with global constraints

Ryuji Funayama; Naokazu Yokoya; Hidehiko Iwasa; Haruo Takemura

This paper describes a new method for extracting facial components from a color image using cooperative active nets with global constraints. We employ the active net model, a region extraction method based on the energy minimization principle. Each net deforms so as to minimize its own energy, while the positions of the nets are controlled by minimizing an additional energy defined by global constraints on their placement. The method has been shown experimentally to be robust to variations in facial size, position, and rotation.
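
A heavily simplified sketch of the two energy levels is given below; the energy terms and the global constraint are illustrative assumptions rather than the authors' formulation. Each net is a grid of nodes that is smoothed toward its neighbours and attracted to dark image regions, while a global term keeps the centers of cooperating nets at prescribed mutual offsets.

```python
# Minimal sketch, assuming a toy energy: smoothness + image intensity per net,
# plus a global term on the net centers.
import numpy as np

def net_step(nodes, image, alpha=0.2, beta=1.0, lr=0.5):
    """One gradient step. nodes: (H, W, 2) grid of (row, col) positions."""
    grad = np.zeros_like(nodes)
    # Internal energy: pull each node toward the mean of its 4-neighbours.
    neigh = (np.roll(nodes, 1, 0) + np.roll(nodes, -1, 0) +
             np.roll(nodes, 1, 1) + np.roll(nodes, -1, 1)) / 4.0
    grad += alpha * (nodes - neigh)
    # Image energy: descend the image gradient so nodes settle on dark regions.
    gy, gx = np.gradient(image)
    r = np.clip(nodes[..., 0].round().astype(int), 0, image.shape[0] - 1)
    c = np.clip(nodes[..., 1].round().astype(int), 0, image.shape[1] - 1)
    grad[..., 0] += beta * gy[r, c]
    grad[..., 1] += beta * gx[r, c]
    return nodes - lr * grad

def global_constraint(nets, offsets, gamma=0.1):
    """Nudge each net so its center stays near a prescribed offset from net 0."""
    base = nets[0].reshape(-1, 2).mean(axis=0)
    out = [nets[0]]
    for net, off in zip(nets[1:], offsets):
        center = net.reshape(-1, 2).mean(axis=0)
        out.append(net - gamma * (center - (base + np.asarray(off, float))))
    return out
```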


Systems, Man and Cybernetics | 1999

Data visualization for supporting query-based data mining

K. Kichiyoshi; Hidehiko Iwasa; Haruo Takemura; Naokazu Yokoya

The paper proposes a methodology for supporting the process of query-based data mining with visualization techniques. Query-based data mining is one of the most important tasks in Knowledge Discovery in Databases (KDD): users hypothesize about patterns in a database and issue a query to confirm the hypothesis. The proposed method supports two aspects of this process, namely proposing an initial hypothesis as a query and modifying the hypothesis based on the query result. In this method, an instance in a database, which has several attributes with numerical or nominal values, is visualized as a color bar whose colored segments correspond to the attribute values. Values of a function that evaluates the utility of a hypothesis are also visualized with colors. This visualization helps users find an initial hypothesis and then modify it interactively to increase its usefulness. Experimental results show that the proposed method helps users find interesting rules in real-world databases.
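
A minimal sketch of the color-bar encoding is shown below; the particular color scales and the treatment of nominal values are assumptions, since the paper does not fix them here. Each attribute of an instance becomes one colored segment of the bar.

```python
# Minimal sketch: numerical values are mapped onto a blue-to-red ramp and
# nominal values onto a fixed (assumed) palette.
import numpy as np

NOMINAL_COLORS = {'yes': (0.2, 0.8, 0.2), 'no': (0.8, 0.2, 0.2)}   # assumed palette

def value_to_rgb(value, lo=0.0, hi=1.0):
    if isinstance(value, str):                                 # nominal attribute
        return NOMINAL_COLORS.get(value, (0.6, 0.6, 0.6))
    t = np.clip((value - lo) / (hi - lo + 1e-12), 0.0, 1.0)    # numerical attribute
    return (t, 0.2, 1.0 - t)                                   # blue (low) -> red (high)

def instance_to_color_bar(instance, ranges, seg_w=20, height=10):
    """Render one database instance as an RGB image, one segment per attribute."""
    bar = np.zeros((height, seg_w * len(instance), 3))
    for i, (value, (lo, hi)) in enumerate(zip(instance, ranges)):
        bar[:, i * seg_w:(i + 1) * seg_w] = value_to_rgb(value, lo, hi)
    return bar

# Example: an instance with two numerical attributes and one nominal attribute.
bar = instance_to_color_bar([0.3, 120.0, 'yes'], [(0, 1), (0, 200), (None, None)])
```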


International Conference on Pattern Recognition | 2000

A stereo vision-based augmented reality system with a wide range of registration

Masayuki Kanbara; Hidehiko Iwasa; Haruo Takemura; Naokazu Yokoya

This paper proposes a vision-based augmented reality system with a wide registration range. To realize an augmented reality system, the real and virtual worlds must be registered geometrically. In a vision-based augmented reality system with marker tracking, the measurement range is usually limited because the markers placed in the real world must be captured by the cameras. The proposed method realizes a stereo vision-based augmented reality system with a wide registration range by automatically detecting and tracking new markers that come into sight. The feasibility of the system has been demonstrated through experiments.
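
The registration bookkeeping behind the growing working range can be sketched briefly. In the code below, marker detection and stereo pose estimation are left as hypothetical inputs; whenever a frame contains a marker with a known world pose together with a new one, the new marker's world pose is obtained by chaining camera-relative transforms.

```python
# Minimal sketch of the map-extension step only; detections are assumed to be
# 4x4 camera-from-marker transforms produced by some stereo pose estimator.
import numpy as np

world_of_marker = {0: np.eye(4)}          # marker 0 defines the world frame

def extend_map(detections):
    """detections: {marker_id: 4x4 camera-from-marker transform} for one frame."""
    known = [m for m in detections if m in world_of_marker]
    if not known:
        return
    ref = known[0]
    # Camera pose in the world frame, obtained via the reference marker.
    world_from_cam = world_of_marker[ref] @ np.linalg.inv(detections[ref])
    for m, cam_from_marker in detections.items():
        if m not in world_of_marker:
            world_of_marker[m] = world_from_cam @ cam_from_marker
```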


The Journal of The Institute of Image Information and Television Engineers | 1996

New Image/Video Media and Its Application. Analysis and Synthesis of Six Primary Facial Expressions Using Cylindrical Data.

Yumiko Tatsuno; Satoshi Suzuki; Naokazu Yokoya; Hidehiko Iwasa; Haruo Takemura

In most conventional approaches to the synthesis of human facial expressions, facial images are generated by manually moving feature points on a face based on the concept of FACS (Facial Action Coding System), primarily with 3D models such as a wireframe model. This paper describes a synthesis-by-analysis approach using range images for producing 3D human facial images with primary expressions. First, view-independent representations of the 3D locations of facial feature points are obtained using an object-centered coordinate system defined on the face. Then we quantify the feature point locations for the neutral expression and the six primary expressions. Applying an image warping technique to both the registered range and surface texture images, we finally generate 3D facial expression images from a neutral expression image and the motion vectors of the facial feature points.
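
The warping step can be sketched as follows, assuming the feature points and per-expression motion vectors are given as (row, col) arrays (the paper applies the same warp to the registered range and texture images). A dense displacement field is interpolated from the sparse feature-point displacements by inverse-distance weighting, and the neutral image is backward-warped with it.

```python
# Minimal sketch, assuming small images and sparse feature points; the
# interpolation scheme (inverse-distance weighting) is an assumption.
import numpy as np

def dense_displacement(shape, feat_pts, motion_vecs, power=2.0):
    """Interpolate sparse (row, col) displacements to every pixel (IDW)."""
    feat_pts = np.asarray(feat_pts, float)
    motion_vecs = np.asarray(motion_vecs, float)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy, xx], axis=-1).reshape(-1, 2).astype(float)
    d = ((grid[:, None, :] - feat_pts[None, :, :]) ** 2).sum(-1) ** (power / 2) + 1e-6
    wgt = 1.0 / d
    disp = (wgt @ motion_vecs) / wgt.sum(axis=1, keepdims=True)
    return disp.reshape(h, w, 2)

def warp(neutral, feat_pts, motion_vecs):
    """Backward-warp the neutral image so feature points move by motion_vecs."""
    h, w = neutral.shape[:2]
    disp = dense_displacement((h, w), feat_pts, motion_vecs)
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip((yy - disp[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xx - disp[..., 1]).round().astype(int), 0, w - 1)
    return neutral[src_y, src_x]
```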


Storage and Retrieval for Image and Video Databases | 1998

Collaborative immersive workspace through a shared augmented environment

Kiyoshi Kiyokawa; Hidehiko Iwasa; Haruo Takemura; Naokazu Yokoya


Virtual Reality Software and Technology | 2002

Vlego: a simple two-handed modeling environment based on toy blocks

Kiyoshi Kiyokawa; Haruo Takemura; Yoshiaki Katayama; Hidehiko Iwasa; Naokazu Yokoya

Collaboration


Dive into Hidehiko Iwasa's collaboration.

Top Co-Authors

Naokazu Yokoya (Nara Institute of Science and Technology)
Haruo Takemura (Nara Institute of Science and Technology)
Kazumasa Yamazawa (Nara Institute of Science and Technology)
Masayuki Kanbara (Nara Institute of Science and Technology)
Shoichi Araki (Nara Institute of Science and Technology)
Takahito Kawanishi (Nippon Telegraph and Telephone)
Yoshiaki Katayama (Nara Institute of Science and Technology)
Hidenori Yamamoto (Nara Institute of Science and Technology)
Hirofumi Fujii (Nara Institute of Science and Technology)