
Publication


Featured research published by Yuichi Ohta.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1985

Stereo by Intra- and Inter-Scanline Search Using Dynamic Programming

Yuichi Ohta; Takeo Kanade

This paper presents a stereo matching algorithm using the dynamic programming technique. The stereo matching problem, that is, obtaining a correspondence between right and left images, can be cast as a search problem. When a pair of stereo images is rectified, pairs of corresponding points can be searched for within the same scanlines. We call this search intra-scanline search. This intra-scanline search can be treated as the problem of finding a matching path on a two-dimensional (2D) search plane whose axes are the right and left scanlines. Vertically connected edges in the images provide consistency constraints across the 2D search planes. Inter-scanline search in a three-dimensional (3D) search space, which is a stack of the 2D search planes, is needed to utilize this constraint. Our stereo matching algorithm uses edge-delimited intervals as the elements to be matched and employs the two searches mentioned above: inter-scanline search for possible correspondences of connected edges in the right and left images, and intra-scanline search for correspondences of edge-delimited intervals on each scanline pair. Dynamic programming is used for both searches, which proceed simultaneously: the former supplies the consistency constraint to the latter, while the latter supplies the matching score to the former. An interval-based similarity metric is used to compute the score. The algorithm has been tested with different types of images, including urban aerial images, synthesized images, and block scenes, and its computational requirements are discussed.
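
The intra-scanline search described above can be sketched as a dynamic program over one scanline pair. This is a minimal per-pixel illustration, not the paper's interval-based formulation: the absolute-difference cost, the occlusion penalty `occ_cost`, and the per-pixel matching elements are simplifying assumptions, and the inter-scanline consistency constraint is omitted.

```python
import numpy as np

def scanline_dp(left, right, occ_cost=2.0):
    """Match one rectified scanline pair by dynamic programming.

    left, right: 1-D sequences of pixel intensities on corresponding
    scanlines. Returns a list of (left_index, right_index) matches.
    Moves on the 2D search plane: diagonal = match, axis moves = occlusion.
    """
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = occ_cost * np.arange(m + 1)   # all-occluded boundary
    cost[:, 0] = occ_cost * np.arange(n + 1)
    move = np.zeros((n + 1, m + 1), dtype=np.uint8)  # 0=diag, 1=up, 2=left
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(float(left[i - 1]) - float(right[j - 1]))
            choices = (cost[i - 1, j - 1] + d,       # match the two pixels
                       cost[i - 1, j] + occ_cost,    # left pixel occluded
                       cost[i, j - 1] + occ_cost)    # right pixel occluded
            move[i, j] = int(np.argmin(choices))
            cost[i, j] = min(choices)
    # backtrack the minimum-cost path from the far corner
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if move[i, j] == 0:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif move[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

With identical scanlines the minimum-cost path is the pure diagonal, i.e. every pixel matches its counterpart.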


Computer Graphics and Image Processing | 1980

Color information for region segmentation

Yuichi Ohta; Takeo Kanade; Toshiyuki Sakai

In color image processing, various kinds of color features can be calculated from the tristimulus values R, G, and B. We attempt to derive a set of effective color features through systematic region-segmentation experiments. An Ohlander-type segmentation algorithm based on recursive thresholding is employed as the experimental tool. At each step of segmenting a region, new color features are calculated for the pixels in that region by the Karhunen-Loève transformation of the R, G, and B data. By analyzing more than 100 color features obtained while segmenting eight kinds of color pictures, we have found that the feature set (R + G + B)/3, R − B, and (2G − R − B)/2 is effective. These three features are significant in this order, and in many cases a good segmentation can be achieved using only the first two. The effectiveness of our color feature set is discussed through a comparative study with various other sets of color features commonly used in image analysis. The comparison is performed in terms of both the quality of the segmentation results and the computation involved in transforming the R, G, and B data to other forms.
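
The three features named in the abstract are simple linear combinations of R, G, and B, so they are straightforward to compute; a minimal sketch follows (the function name and array layout are assumptions):

```python
import numpy as np

def ohta_features(rgb):
    """Compute the three Ohta color features from RGB data.

    rgb: float array of shape (..., 3) with R, G, B in the last axis.
    Returns, in order of significance:
      I1 = (R + G + B) / 3   (intensity)
      I2 = R - B             (red-blue opponent)
      I3 = (2G - R - B) / 2  (green vs. magenta)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i1 = (r + g + b) / 3.0
    i2 = r - b
    i3 = (2.0 * g - r - b) / 2.0
    return i1, i2, i3
```

Per the abstract, recursive-thresholding segmentation on just I1 and I2 already gives good results in many cases.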


computer vision and pattern recognition | 1996

Occlusion detectable stereo - occlusion patterns in camera matrix

Yuichi Nakamura; Tomohiko Matsuura; Kiyohide Satoh; Yuichi Ohta

In stereo algorithms with more than two cameras, improved accuracy is often reported because such algorithms are robust against noise. However, another important aspect of polynocular stereo, namely its ability to detect occlusion, has received less attention. We analyzed occlusion in the camera-matrix stereo (SEA) in depth and developed a simple but effective method to detect the presence of occlusion and to eliminate its effect in the correspondence search. By considering several statistics on occlusion and accuracy in the SEA, we derived a few base masks that represent occlusion patterns and are effective for occlusion detection. Experiments using typical indoor scenes showed very good performance, yielding dense and accurate depth maps even at the occluding boundaries of objects.
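
The base-mask idea can be illustrated in a generic form: aggregate the matching cost over several camera subsets and keep the best one, so that cameras whose view is blocked by an occluder can be excluded. This sketch is only in the spirit of the description; the actual SEA base masks and the statistics used to derive them are not reproduced here.

```python
import numpy as np

def masked_cost(per_camera_cost, masks):
    """Occlusion-tolerant cost aggregation over camera subsets.

    per_camera_cost: (C,) matching cost of each surrounding camera for
    one candidate depth. masks: (M, C) 0/1 arrays, each selecting the
    cameras assumed unoccluded under one occlusion pattern.
    Returns the smallest mean cost over the masks: if some cameras see
    an occluder, at least one mask excludes them and keeps the score low.
    """
    per_camera_cost = np.asarray(per_camera_cost, dtype=float)
    masks = np.asarray(masks, dtype=float)
    means = (masks @ per_camera_cost) / masks.sum(axis=1)
    return float(means.min())
```

Here a single outlier camera (cost 9) is discounted by the mask that excludes it, so the aggregated cost stays near the unoccluded level.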


international symposium on mixed and augmented reality | 2003

Live mixed-reality 3D video in soccer stadium

Takayoshi Koyama; Itaru Kitahara; Yuichi Ohta

This paper proposes a method to realize a 3D video display system that can capture video from multiple cameras, reconstruct 3D models and transmit 3D video data in real time. We represent a target object with a simplified 3D model consisting of a single plane and a 2D texture extracted from multiple cameras. This 3D model is simple enough to be transmitted via a network. We have developed a prototype system that can capture multiple videos, reconstruct 3D models, transmit the models via a network, and display 3D video in real time. A 3D video of a typical soccer scene that includes a dozen players was processed at 26 frames per second.


international conference on computer vision | 1990

An approach to color constancy using multiple images

Masato Tsukada; Yuichi Ohta

A novel computational algorithm for color constancy suitable for robot vision is proposed. A robot, or a computer, can exactly memorize image information observed in the past, so it is natural to use more than one image to achieve color constancy. The algorithm can recover the illumination color and the reflectance color based only on the RGB values of two objects identified in two images. It requires no specific assumption about the scene. Experiments show the validity of the proposed algorithm.
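
Under a diagonal (von Kries) illumination model, in which each observed channel is the product of an illuminant coefficient and a reflectance coefficient, the recovery from two objects seen in two images is direct. This model and the normalization are assumptions made here for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def recover_two_image(c1a, c1b, c2a, c2b):
    """Sketch of recovering reflectance and illumination from two images.

    Assumes observation = illuminant * reflectance per channel, with the
    illuminant of image 1 normalized to (1, 1, 1), so the recovery is
    only up to that scale. c1a: RGB of object a in image 1, c2b: RGB of
    object b in image 2, etc. (arrays of shape (3,)).
    Returns (reflectance_a, reflectance_b, illuminant_2).
    """
    c1a, c1b, c2a, c2b = map(np.asarray, (c1a, c1b, c2a, c2b))
    r_a = c1a.astype(float)      # reflectance of object a (up to scale)
    r_b = c1b.astype(float)      # reflectance of object b
    l2 = c2a / r_a               # channel-wise illuminant of image 2
    # consistency check: object b must agree with the same illuminant
    assert np.allclose(c2b, l2 * r_b), "observations violate the diagonal model"
    return r_a, r_b, l2
```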


international symposium on mixed and augmented reality | 2004

Outdoor see-through vision utilizing surveillance cameras

Yoshinari Kameda; Taisuke Takemasa; Yuichi Ohta

This paper presents a new outdoor mixed-reality system designed for people carrying a small camera-equipped handheld device in an outdoor scene in which a number of surveillance cameras are embedded. We propose a new functionality for outdoor mixed reality: the handheld device can display the live status of invisible areas hidden by structures such as buildings and walls. The function is implemented on a small camera-equipped handheld subnotebook PC (HPC). Videos of the invisible areas are taken by the surveillance cameras and precisely overlaid on the video of the HPC camera, so a user can notice objects in the invisible areas and directly see what they are doing. We utilize the surveillance cameras for two purposes: (1) they capture videos of the invisible areas, which are trimmed and warped so that they can be superimposed onto the video of the HPC camera; and (2) they update the textures of calibration markers in order to handle texture changes in the real outdoor world. We have implemented a preliminary system with four surveillance cameras and shown that it can visualize invisible areas in real time.


international conference on computer graphics and interactive techniques | 2006

A nested marker for augmented reality

Keisuke Tateno; Itaru Kitahara; Yuichi Ohta

A Nested Marker, a novel visual marker for camera calibration in augmented reality (AR), enables accurate calibration even when the observer moves very close to or far away from the marker. The proposed Nested Marker has a recursive layered structure: one marker at an upper layer contains four smaller markers at the lower layer, and the smaller markers can in turn have lower-layer markers nested inside them. Each marker can be identified by its interior pattern, so the system can select the proper calibration parameter set for that marker. When the observer views the marker close-up, the lowest-layer marker works; when the observer views it from a distance, the top-layer marker works. It is also possible to utilize all visible markers in different layers simultaneously for more stable calibration. Note that the Nested Marker can be used in a standard ARToolkit framework. We have also developed an AR system to demonstrate the capabilities of the Nested Marker.
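
The layer-selection behavior described above can be illustrated with a toy rule based on the marker's apparent size in the image. All thresholds and the half-size-per-layer geometry below are assumptions for illustration, not the paper's method.

```python
def usable_layers(marker_px, image_w, min_frac=0.05, max_frac=0.9, depth=3):
    """Toy sketch: pick which layers of a nested marker are usable.

    marker_px: apparent width of the top-layer marker in pixels;
    image_w: image width in pixels. Each lower layer is assumed to be
    half the parent's width (a 2x2 nesting of four sub-markers).
    A layer is usable when it is large enough to decode its pattern
    and small enough to fit in the frame.
    """
    layers, size = [], float(marker_px)
    for level in range(depth):
        frac = size / image_w
        if min_frac <= frac <= max_frac:
            layers.append(level)   # this layer is decodable and in-frame
        size /= 2.0                # descend to the next nested layer
    return layers
```

At moderate distance several layers qualify at once, matching the abstract's note that all visible layers can be used together for more stable calibration; very close up, only a lower layer remains usable.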


international conference on multimedia computing and systems | 1996

3D image display with motion parallax by camera matrix stereo

Kiyohide Satoh; Itaru Kitahara; Yuichi Ohta

We propose a 3D image display system that can present real scenes with realistic motion parallax. In the sensing system, a scene is observed using a camera matrix. An effective stereo algorithm, SEA, which utilizes a 3×3 image matrix, recovers the depth information of the scene with the density and sharpness required for high-quality image generation. In the display system, 2D images with proper motion parallax are generated and presented following the viewing position of the observer. We describe a novel algorithm to determine the view parameters suitable for reproducing motion parallax on a fixed screen, and an image-generation algorithm that can cope with arbitrary viewing positions. A prototype system has been developed to demonstrate the feasibility of the proposed algorithms.


International Journal of Computer Vision | 2007

Live 3D Video in Soccer Stadium

Yuichi Ohta; Itaru Kitahara; Yoshinari Kameda; Hiroyuki Ishikawa; Takayoshi Koyama

This paper proposes a method to realize a 3D video system that can capture video data from multiple cameras, reconstruct 3D models, transmit 3D video streams via the network, and display them on remote PCs. All processes are done in real time. We represent a player with a simplified 3D model consisting of a single plane and a live video texture extracted from multiple cameras. This 3D model is simple enough to be transmitted via a network. A prototype system has been developed and tested at actual soccer stadiums. A 3D video of a typical soccer scene, which includes more than a dozen players, was processed at video rate and transmitted to remote PCs through the internet at 15–24 frames per second.


Presence: Teleoperators & Virtual Environments | 2002

Share-Z: client/server depth sensing for see-through head-mounted displays

Yuichi Ohta; Yasuyuki Sugaya; Hiroki Igarashi; Toshikazu Ohtsuki; Kaito Taguchi

In mixed reality, occlusions and shadows are important for realizing a natural fusion between the real and virtual worlds. To achieve this, it is necessary to acquire dense depth information of the real world from the observer's viewing position. The depth sensor must be attached to the observer's see-through HMD because he/she moves around. The sensor should be small and light enough to be attached to the HMD and should be able to produce a reliable dense depth map at video rate. Unfortunately, however, no such depth sensors are available. We propose a client/server depth-sensing scheme to solve this problem. A server sensor located at a fixed position in the real world acquires the 3-D information of the world, and a client sensor attached to each observer produces the depth map from his/her viewing position using the 3-D information supplied by the server. Multiple clients can share the 3-D information of the server; we call this scheme Share-Z. In this paper, the concept and merits of Share-Z are discussed. An experimental system developed to demonstrate the feasibility of Share-Z is also described.
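
The client-side step, producing a depth map at the observer's viewpoint from the server's 3-D data, can be sketched as point reprojection with a z-buffer. This is a minimal sketch under a pinhole camera model; the actual Share-Z data format, warping, and hole filling are not reproduced here.

```python
import numpy as np

def client_depth_map(points_world, K, R, t, width, height):
    """Render a client-view depth map from the server's 3-D points.

    points_world: (N, 3) world points acquired by the fixed server sensor.
    K: (3, 3) client camera intrinsics; R, t: client pose (world -> camera).
    Returns a (height, width) depth map; pixels with no point stay +inf
    (a z-buffer keeps the nearest point per pixel).
    """
    cam = points_world @ R.T + t              # world -> client camera frame
    depth = np.full((height, width), np.inf)
    cam = cam[cam[:, 2] > 0]                  # keep points in front of camera
    proj = cam @ K.T                          # apply intrinsics
    u = (proj[:, 0] / proj[:, 2]).round().astype(int)
    v = (proj[:, 1] / proj[:, 2]).round().astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for x, y, z in zip(u[ok], v[ok], cam[ok, 2]):
        depth[y, x] = min(depth[y, x], z)     # z-buffer: keep nearest surface
    return depth
```

Each client runs only this lightweight reprojection against its own pose, which is the point of the scheme: the heavy depth sensing happens once, at the server.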

Collaboration


Dive into Yuichi Ohta's collaborations.

Top Co-Authors

Motoyuki Ozeki
Kyoto Institute of Technology

Yasuhiro Mukaigawa
Nara Institute of Science and Technology

Takeo Kanade
Carnegie Mellon University