Publication


Featured research published by Tsuyoshi Yamamura.


european conference on computer vision | 1996

Separating Real and Virtual Objects from Their Overlapping Images

Noboru Ohnishi; Kenji Kumaki; Tsuyoshi Yamamura; Toshimitsu Tanaka

We often see scenes where an object's virtual image is reflected on window glass and overlaps with the image of another object behind the glass. This paper proposes a method for separating real and virtual objects from the overlapping images. Our method is based on the optical property that light reflected on glass is polarized, while light transmitted through glass is less polarized, so reflected light can be eliminated with a polarizing filter. The polarization direction, however, changes even for planar glass and is not easily determined without information about the position and orientation of the glass and objects relative to the camera. Our method uses a series of images obtained by rotating a polarizing filter. Real objects are separated by selecting the minimum image intensity among the series for each pixel. The virtual image of objects is obtained by subtracting the real-object image from the maximum image intensity among the series for each pixel. We present experiments with actual scenes to demonstrate the effectiveness of the proposed method.
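The per-pixel min/max step the abstract describes can be sketched as follows (a minimal illustration, assuming an aligned image stack; the function and variable names are ours, not the authors'):

```python
import numpy as np

def separate_real_virtual(frames):
    """Separate real (transmitted) and virtual (reflected) components
    from a stack of images taken at different polarizer angles.
    frames: sequence of arrays, shape (n_angles, H, W).
    Sketch only -- assumes the camera and scene are static."""
    stack = np.asarray(frames, dtype=float)
    # At the angle where the polarizer blocks the reflection, only the
    # transmitted (real) light remains, so the per-pixel minimum
    # approximates the real-object image.
    real = stack.min(axis=0)
    # The per-pixel maximum contains the full reflected component;
    # subtracting the real image leaves the virtual image.
    virtual = stack.max(axis=0) - real
    return real, virtual
```

With a synthetic stack where reflected intensity follows cos² of the polarizer angle, the minimum recovers the transmitted term and the max-minus-min difference recovers the reflected term.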


international conference on pattern recognition | 2002

Unified approach to image distortion

Toru Tamaki; Tsuyoshi Yamamura; Noboru Ohnishi

We propose a new unified approach that deals with two formulations of image distortion, which have previously been developed separately, and a method for estimating the distortion parameters using both formulations. The proposed method is based on image registration and uses nonlinear optimization to estimate parameters including view change and radial distortion. Experimental results demonstrate that our approach works well for both formulations.
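The abstract does not name the two formulations; a common pair of radial-distortion formulations in the literature is the polynomial model and the division model, sketched below purely for orientation (the coefficients and forms here are illustrative, not the paper's):

```python
def polynomial_distort(x, y, k1):
    """Polynomial model: the distorted point is the undistorted point
    scaled by (1 + k1 * r^2), where r is the radius from the center."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def division_undistort(xd, yd, k1):
    """Division model: the undistorted point is recovered by dividing
    the distorted point by (1 + k1 * r_d^2)."""
    r2 = xd * xd + yd * yd
    s = 1.0 + k1 * r2
    return xd / s, yd / s
```

Both reduce to the identity when k1 = 0; for k1 > 0 the polynomial model pushes points outward while the division model pulls them inward, which is why the two are usually estimated separately.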


intelligent robots and systems | 1999

Character-based mobile robot navigation

Yongmei Liu; Toshimitsu Tanaka; Tsuyoshi Yamamura; Noboru Ohnishi

We describe a method for navigating a mobile robot in its environment using signboards in scenes as landmarks. This method will enable us to realize human-like navigation and map generation, and can also benefit human-robot communication. We choose signboards on walls and doors as landmarks. An environment map containing the approximate position of each signboard is generated by the robot when it explores the environment for the first time, and is used afterward for re-localizing and navigating the robot. First, characters in scene images are detected by using several heuristics about characters and character lines. Then, we calculate the relative orientation of the camera and a detected signboard from a single view, using two methods based on the geometry of perspective projection. Finally, we reconstruct a distorted signboard image using two corresponding methods, which makes the recognition of characters under a wider range of viewing conditions much easier.


asian conference on computer vision | 1998

Detecting Characters in Grey-Scale Scene Images

Yongmei Liu; Tsuyoshi Yamamura; Noboru Ohnishi; Noboru Sugie

This paper proposes a method for detecting characters in grey-scale scene images so that a vision-based mobile robot can be navigated by character information. First, we extract subregions with high spatial frequency and large grey-level variance from an input scene image as candidates for character components. Then, we select characters by using several heuristics, such as constraints on size and shape, bimodality of the intensity histogram, and alignment and proximity of characters. We conducted an experiment using 20 indoor and 40 outdoor images; character lines were detected at a high rate of 80%.
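The first filtering stage, extracting subregions with large grey-level variance, might look like the sketch below (the block size and threshold are our assumptions, not the paper's values, and the paper additionally uses spatial-frequency and shape heuristics not shown here):

```python
import numpy as np

def candidate_blocks(img, block=8, var_thresh=100.0):
    """Flag image blocks with large grey-level variance as candidate
    character components. img: 2D grey-scale array. Returns a boolean
    mask with one entry per block. Illustrative sketch only."""
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Characters produce strong light/dark contrast inside a
            # block, hence high variance; flat background does not.
            mask[i, j] = patch.var() > var_thresh
    return mask
```

Candidate blocks would then be merged into components and pruned by the size, shape, bimodality, and alignment heuristics the abstract lists.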


international conference on information fusion | 2002

Relating audio-visual events caused by multiple movements: in the case of entire object movement

Jinji Chen; Toshiharu Mukai; Yoshinori Takeuchi; Tetsuya Matsumoto; Hiroaki Kudo; Tsuyoshi Yamamura; Noboru Ohnishi

Relating audio-visual events is important for constructing an artificial intelligence system that can acquire audio-visual knowledge of movement through active observation, without being taught. This paper proposes a method for relating multiple audio-visual events observed by a camera and a microphone according to general laws, without object-specific knowledge (including the case of entire-object movement). As corresponding cues, we use Gestalt grouping laws: simultaneity between the onset of a sound and a change in movement (or the start of a motion), and similarity of repetition between sound and movement. Based on the correlation coefficient between auditory and visual sequences, the frequency component at sound onset is related to the short-term space-time invariants (STSTI) of movement. We experimented in a real environment and obtained satisfactory results showing the effectiveness of the proposed method.
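The correlation-based matching step can be illustrated schematically: given a sound envelope and several candidate motion sequences, pick the motion whose time variation correlates best with the sound. This is a minimal sketch of the idea, not the paper's STSTI formulation; all names are ours.

```python
import numpy as np

def match_sound_to_motion(sound_seq, motion_seqs):
    """Return the index of the motion sequence whose time variation is
    most correlated (in absolute value) with the sound sequence.
    Sketch of correlation-based audio-visual matching."""
    corrs = [abs(np.corrcoef(sound_seq, m)[0, 1]) for m in motion_seqs]
    return int(np.argmax(corrs))
```

A motion that repeats in step with the sound yields a correlation near 1, while an unrelated motion yields a value near 0, so the argmax selects the causing movement.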


systems man and cybernetics | 2000

Finding correspondence between visual and auditory events based on perceptual grouping laws across different modalities

Jinji Chen; Toshiharu Mukai; Yoshinori Takeuchi; Hiroaki Kudo; Tsuyoshi Yamamura; Noboru Ohnishi

A human being understands the environment by integrating information obtained by sight, hearing and touch. To integrate information across different senses, a human being must find the correspondence between events observed by different senses. This paper seeks to relate audio-visual events caused by more than one movement according to general physical laws, without object-specific knowledge. As corresponding cues, we use Gestalt grouping laws: simultaneity between the occurrence of a sound and a change in movement, similarity of time variation between sound and movement, etc. We conducted experiments in a real environment and obtained satisfactory results showing the effectiveness of the proposed method.


The Journal of The Institute of Image Information and Television Engineers | 2001

An Augmented Method For Finding Character Lines From a Gray Scene Image

Yongmei Liu; Tsuyoshi Yamamura; Toshimitsu Tanaka; Noboru Ohnishi

An augmented method for finding character lines in a gray scene image is proposed. The approach uses several heuristics about both characters (such as size, pixel symmetry, and bimodality of the intensity histogram) and character lines (such as proximity and alignment of characters) to discriminate characters from other objects in a scene image. Experimental results indicated that the method performed well for signboards imaged from a relatively wide range of viewing directions.


Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision | 2000

Identification and distortion rectification of signboards in real scene image for robot navigation

Yongmei Liu; Tsuyoshi Yamamura; Toshimitsu Tanaka; Noboru Ohnishi

This paper describes an approach for finding a signboard, together with the camera viewing direction, in a gray scene image, and for rectifying a distorted signboard image taken from a non-orthogonal viewing direction. First, we use heuristics about both characters and character lines to discriminate characters from other objects in a gray scene image. Then, we compute the orientation of the camera relative to a detected signboard from a single view. Finally, we rectify the distortion of signboard images that are viewed at an angle. The only restriction for viewing-direction estimation and distortion rectification is that a detected signboard must be rectangular. We evaluate our methods on both simulated images and real scene images. Experimental results demonstrate the effectiveness of the proposed methods.
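Rectifying a rectangular signboard seen at an angle is classically done by estimating the homography that maps the four detected corners onto a fronto-parallel rectangle. The sketch below uses the standard direct linear transform (DLT); it illustrates the general technique, not necessarily the paper's exact formulation.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform. src, dst: four (x, y) corner
    pairs. Sketch of a standard technique."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Warping the signboard image through the inverse of this homography yields the rectified, fronto-parallel view in which character recognition is easier.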


systems man and cybernetics | 1999

Modeling dynamical grouping process

Tsuyoshi Yamamura; Hiroaki Kudo; Noboru Ohnishi; Noboru Sugie

The mechanism underlying perceptual grouping of visual stimuli is not static but dynamic. We previously proposed a neural network model of the dynamical grouping process, consisting of a 2D array of generalized flip-flops. In that simulation, however, we assumed that one generalized flip-flop corresponded to one scene item. From a cognitive standpoint, one generalized flip-flop should correspond to one specific local visual area. We therefore arrange generalized flip-flops two-dimensionally so that each corresponds to a specific local visual area, and connect them to one another to construct what we call a cooperative network. The network was applied to sample figures consisting of dots, and the time course of grouping was observed. The dynamical grouping process was successfully simulated by the proposed network.


visual communications and image processing | 1998

Online elimination of reflected images to generate high-quality images

Noboru Ohnishi; Masaki Iwase; Tsuyoshi Yamamura; Toshimitsu Tanaka

We often see scenes where an object's image is reflected on window glass and overlaps with an image of another object behind the glass. This paper proposes on-line methods for eliminating images reflected specularly on a smooth surface such as glass or plastic. Our methods are based on the optical property that light reflected on glass is polarized, while light transmitted through glass is less polarized, so reflected light can be eliminated with a polarizing filter. The polarization direction, however, changes even for planar glass and is not easily determined without information about the position and orientation of the glass and objects relative to the camera. Our method uses a series of images obtained by rotating a polarizing filter placed in front of a camera. Reflected images are removed by selecting the minimum image intensity among the series for each pixel. We propose two methods for estimating the minimum image: the min-max method and the parameter-estimation method. We conducted experiments with real images and compared the performance of the two methods. As a result, we could generate high-quality images free of reflections at a semi-video rate of 15 frames per second.
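For the parameter-estimation variant, a natural model is that each pixel's intensity varies sinusoidally with the polarizer angle (by Malus's law the reflected term goes as cos² of the angle, i.e. a sinusoid in twice the angle), so the minimum can be estimated from a few samples by least squares. The sketch below is one such estimator under that assumption; the paper's exact estimator is not reproduced here.

```python
import numpy as np

def estimate_min_intensity(angles, intensities):
    """Fit I(t) = a + b*cos(2t) + c*sin(2t) to (angle, intensity)
    samples for one pixel and return the fitted minimum
    a - sqrt(b^2 + c^2). angles in radians. Illustrative sketch."""
    t = np.asarray(angles, dtype=float)
    # cos^2(t - phi) expands to a constant plus cos(2t), sin(2t) terms,
    # so a linear least-squares fit in those basis functions suffices.
    M = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    a, b, c = np.linalg.lstsq(M, np.asarray(intensities, dtype=float),
                              rcond=None)[0]
    return a - np.hypot(b, c)
```

Unlike the min-max method, this estimator does not require the filter to pass through the exact extinction angle, which is what makes a small number of samples, and hence near-video-rate operation, plausible.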
