
Publication


Featured research published by Tomohiro Yendo.


Signal Processing: Image Communication | 2009

View generation with 3D warping using depth information for FTV

Yuji Mori; Norishige Fukushima; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto

Free viewpoint images can be generated from multi-view images using the Ray-Space method. Ray-Space data requires ray interpolation to satisfy the plenoptic function, and ray interpolation is realized by estimating view-dependent depth. Depth estimation is usually a costly process, so it is desirable to remove it from the rendering stage to achieve real-time rendering. This paper proposes a method that renders a novel view image from multi-view images and depth maps computed in advance. The virtual viewpoint image is generated by 3D warping, which introduces problems that do not occur in methods with view-dependent depth estimation. We handle these problems by first projecting the depth map onto the virtual image plane and then post-filtering the projected depth map. We succeeded in obtaining high-quality arbitrary viewpoint images from a relatively small number of cameras.
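The depth-first warping described above can be illustrated with a minimal sketch. It assumes rectified, parallel cameras (so warping reduces to a horizontal disparity shift) and a precomputed disparity map; it forward-projects the disparity to the virtual view, median-filters it to close warping cracks, and then back-warps the colors. The paper's actual camera geometry, filtering, and hole handling may differ.

```python
import numpy as np
from scipy.ndimage import median_filter

def warp_to_virtual_view(ref_image, ref_disparity, alpha):
    """Render a virtual view between two rectified cameras (illustrative only).

    ref_image     : (H, W, 3) reference color image
    ref_disparity : (H, W) per-pixel disparity of the reference view, in pixels
    alpha         : virtual camera position in [0, 1] along the baseline
                    (0 = reference view, 1 = the other camera)
    """
    H, W = ref_disparity.shape

    # 1) Forward-project the depth (disparity) map onto the virtual image
    #    plane, keeping the nearest surface (largest disparity) on collisions.
    warped_disp = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            d = ref_disparity[y, x]
            xv = int(round(x - alpha * d))
            if 0 <= xv < W and d > warped_disp[y, xv]:
                warped_disp[y, xv] = d

    # 2) Post-filter the projected disparity map to close the small cracks
    #    that forward warping leaves between neighboring pixels.
    warped_disp = median_filter(warped_disp, size=3)

    # 3) Backward-warp: fetch each virtual pixel's color from the reference
    #    image using the filtered disparity.
    out = np.zeros_like(ref_image)
    for y in range(H):
        for x in range(W):
            xr = int(round(x + alpha * warped_disp[y, x]))
            if 0 <= xr < W:
                out[y, x] = ref_image[y, xr]
    return out
```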


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Multiview Video Coding Using View Interpolation and Color Correction

Kenji Yamamoto; Masaki Kitahara; Hideaki Kimata; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto; Shinya Shimizu; Kazuto Kamikura; Yoshiyuki Yashima

Neighboring views in multiview video systems are highly correlated, so they should be exploited to compress the video efficiently. There are many approaches to doing this, but most treat pictures of other views in the same way as pictures of the current view, i.e., pictures of other views are used as reference pictures (inter-view prediction). In this paper we introduce two approaches to improving compression efficiency. The first is to synthesize pictures at a given time and position by view interpolation and use them as reference pictures (view-interpolation prediction); in other words, we compensate for geometry to obtain more precise predictions. The second is to correct the luminance and chrominance of other views using lookup tables, to compensate for photoelectric variation among individual cameras. We implemented these ideas in H.264/AVC with inter-view prediction and confirmed that they work well. The experimental results show that these ideas can reduce the number of generated bits by approximately 15% without loss of PSNR.
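The color-correction idea, mapping one camera's values onto another's with lookup tables, can be sketched as follows. This assumes pairs of pixel values observed at corresponding scene points in the two views are available (how those correspondences are obtained is not shown) and builds one 256-entry table per channel; the tables actually used in the paper may be constructed differently.

```python
import numpy as np

def build_color_lut(src_samples, dst_samples):
    """256-entry lookup table mapping pixel values of the view being
    corrected (src) to the current view (dst), built from corresponding
    scene points. Both inputs are 1-D uint8 arrays of equal length."""
    lut = np.empty(256, dtype=np.float64)
    observed = np.zeros(256, dtype=bool)
    for v in range(256):
        mask = src_samples == v
        if mask.any():
            lut[v] = dst_samples[mask].mean()   # average target value for this source value
            observed[v] = True
    # Interpolate entries for source values that never occurred in the samples.
    xs = np.arange(256)
    lut = np.interp(xs, xs[observed], lut[observed])
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)

# Usage: one LUT per Y/U/V channel, applied before the other view is used
# as a reference picture.
# corrected_channel = lut[other_view_channel]
```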


IEEE Communications Magazine | 2014

Image-sensor-based visible light communication for automotive applications

Takaya Yamazato; Isamu Takai; Hiraku Okada; Toshiaki Fujii; Tomohiro Yendo; Shintaro Arai; Michinori Andoh; Tomohisa Harada; Keita Yasutomi; Keiichiro Kagawa; Shoji Kawahito

This article introduces VLC for automotive applications using an image sensor; in particular, V2I-VLC and V2V-VLC are presented. While previous studies have documented the effectiveness of V2I and V2V communication using radio technology for improving automotive safety, here we identify characteristics unique to image-sensor-based VLC compared to radio-wave technology. The two primary advantages of a VLC system are its line-of-sight feature and an image sensor that not only provides VLC functions but also enables potential vehicle-safety applications through image and video processing. We present two ongoing image-sensor-based V2I-VLC and V2V-VLC projects. In the first, a transmitter using an LED array (assumed to be an LED traffic light) and a receiver using a high-frame-rate CMOS image sensor camera are introduced as a potential V2I-VLC system; real-time transmission of an audio signal has been confirmed through a field trial. In the second project, we introduce a newly developed CMOS image sensor capable of receiving high-speed optical signals and demonstrate its effectiveness through a V2V communication field trial. In experiments, owing to the high-speed signal-reception capability of the camera receiver using the developed image sensor, a data transmission rate of 10 Mb/s has been achieved, and image (320 × 240, color) reception has been confirmed together with simultaneous reception of various internal vehicle data, such as vehicle ID and speed.


Proceedings of the IEEE | 2012

FTV for 3-D Spatial Communication

Masayuki Tanimoto; Mehrdad Panahpour Tehrani; Toshiaki Fujii; Tomohiro Yendo

Free-viewpoint TV (FTV) is at the frontier of audiovisual communications. FTV is an innovative medium that enables us to view 3-D space by freely changing our viewpoint; it also allows us to listen at any point in the 3-D space. Since FTV transmits all the audiovisual information of the 3-D space, it can reconstruct an audiovisual replica of that space anywhere and at any time. For video, FTV captures a subset of the rays in 3-D space using many cameras, and the rays that are not captured are obtained by interpolating the captured ones. We constructed real-time FTV systems covering the complete chain from image capture to display, and also ran FTV on a laptop computer and on a mobile player. For audio, two kinds of free listening-point systems are demonstrated. MPEG regards FTV as the most challenging 3-D medium and has been conducting international standardization activities for it. The first phase of FTV was multiview video coding (MVC) and the second phase is 3-D video (3DV). MVC enables the efficient coding of multiple camera views; it was completed in 2009 and has been adopted by Blu-ray 3D. 3DV is a standard targeting a variety of 3-D displays; its call for proposals was issued in March 2011.
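The ray-interpolation step, synthesizing rays that no camera captured from rays that were, can be sketched for the simple case of a 1-D horizontal camera array. The blend below deliberately ignores the view-dependent depth (disparity) correction that Ray-Space interpolation actually applies, so it is only a naive illustration of filling in a missing ray from the two nearest cameras.

```python
import numpy as np

def interpolate_ray(views, camera_xs, x_virtual, pixel_uv):
    """Estimate the ray a virtual camera at x_virtual would capture at image
    coordinate pixel_uv, by blending the two nearest real cameras.

    views     : list of (H, W, 3) images, one per camera
    camera_xs : sorted 1-D array of camera positions along the baseline
    """
    i = int(np.searchsorted(camera_xs, x_virtual))
    i = int(np.clip(i, 1, len(camera_xs) - 1))
    x0, x1 = camera_xs[i - 1], camera_xs[i]
    w = (x_virtual - x0) / (x1 - x0)            # blend weight in [0, 1]
    u, v = pixel_uv
    ray0 = views[i - 1][v, u].astype(np.float64)
    ray1 = views[i][v, u].astype(np.float64)
    # A depth-aware interpolator would first shift (u, v) by the estimated
    # disparity in each view; this naive version blends the same coordinate.
    return (1.0 - w) * ray0 + w * ray1
```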


Vehicular Technology Conference | 2010

Improved Decoding Methods of Visible Light Communication System for ITS Using LED Array and High-Speed Camera

Toru Nagura; Takaya Yamazato; Masaaki Katayama; Tomohiro Yendo; Toshiaki Fujii; Hiraku Okada

In this paper, we consider visible light communication systems for Intelligent Transport Systems (ITS) that use an LED array as a transmitter and a high-speed camera as a receiver. Previously, we proposed a hierarchical coding scheme that allocates data to the spatial frequency components of the image according to priority; with this scheme, high-priority information can be received even when the communication distance is long. However, hierarchical coding requires distinguishing multi-valued data in the received image. In this paper, we propose two improved decoding methods and demonstrate experimentally that they distinguish the multi-valued data more reliably.
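The hierarchical coding mentioned above assigns data to spatial frequency components by priority, so that low-frequency (high-priority) content survives the blur of long-distance capture. The sketch below only illustrates that idea with a 2-D DCT over the LED-array pattern; the pattern size, amplitude, and bit ordering are made-up parameters, not the scheme actually used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_hierarchical(bits_high, bits_low, size=16, amp=64.0):
    """Map data bits onto the 2-D DCT coefficients of an LED-array pattern.
    High-priority bits occupy the lowest spatial frequencies."""
    coeffs = np.zeros((size, size))
    # Order coefficients by increasing spatial frequency (sum of indices).
    order = sorted(((u, v) for u in range(size) for v in range(size)),
                   key=lambda t: t[0] + t[1])
    bits = list(bits_high) + list(bits_low)
    for (u, v), b in zip(order[1:], bits):      # order[0] is the DC term
        coeffs[u, v] = amp if b else -amp
    coeffs[0, 0] = 2.0 * size * size            # DC offset to raise the mean intensity
    return idctn(coeffs, norm="ortho")          # intensity pattern for the LED array

def decode_hierarchical(pattern, n_bits):
    """Recover the first n_bits (highest priority first) by thresholding the
    lowest-frequency DCT coefficients of the captured pattern."""
    size = pattern.shape[0]
    coeffs = dctn(pattern, norm="ortho")
    order = sorted(((u, v) for u in range(size) for v in range(size)),
                   key=lambda t: t[0] + t[1])
    return [1 if coeffs[u, v] > 0 else 0 for (u, v) in order[1:1 + n_bits]]
```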


Intelligent Vehicles Symposium | 2005

Road-to-vehicle communication using LED traffic light

Mitsuhiro Wada; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto

We propose a parallel wireless optical communication system for road-to-vehicle communication that uses an LED traffic light as a transmitter and a high-speed camera as a receiver. The proposed system enables multiple channels by arranging the LEDs in a two-dimensional array and dividing them spatially. The LED transmitters, arranged in a plane, are modulated individually, and a camera demodulates the signals using image-processing techniques. We examined the transmission speed of the proposed system, and a proof-of-concept experiment was conducted under laboratory conditions. We confirmed communication at 2.78 kbps with 192 LEDs spatially divided into eight groups and the high-speed camera set to 500 fps.
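A minimal sketch of the receiver side is given below, under the assumption that each spatial group of LEDs carries one on/off bit per camera frame and that the pixel region of each group in the image is already known; the actual system locates and demodulates the LEDs with more elaborate image processing.

```python
import numpy as np

def demodulate_frame(frame, group_masks, threshold=0.5):
    """Recover one bit per LED group from a single camera frame.

    frame       : (H, W) grayscale image containing the LED traffic light
    group_masks : list of boolean (H, W) masks, one per spatial group of LEDs
    """
    frame = frame.astype(np.float64)
    frame = (frame - frame.min()) / (np.ptp(frame) + 1e-9)   # normalize to 0..1
    return [1 if frame[m].mean() > threshold else 0 for m in group_masks]

def demodulate_sequence(frames, group_masks):
    """Demodulate a frame sequence. The raw bit rate is roughly
    len(group_masks) bits/frame times the camera frame rate."""
    return [demodulate_frame(f, group_masks) for f in frames]
```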


International Conference on Multimedia and Expo | 2006

Multi-View Video Coding using View Interpolation and Reference Picture Selection

Masaki Kitahara; Hideaki Kimata; Shinya Shimizu; Kazuto Kamikura; Yoshiyuki Yashima; Kenji Yamamoto; Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto

We propose a new multi-view video coding method based on H.264/AVC that adaptively selects between motion and disparity compensation. A key point of the proposed method is the use of view interpolation as a tool for disparity compensation, by assigning reference picture indices to interpolated images. Experimental results show that significant gains can be obtained compared to the conventional approach that is often used.


IEEE Intelligent Vehicles Symposium | 2009

On-vehicle receiver for distant visible light road-to-vehicle communication

Satoshi Okada; Tomohiro Yendo; Takaya Yamazato; Toshiaki Fujii; Masayuki Tanimoto; Yoshikatsu Kimura

In this paper, we propose a road-to-vehicle visible light communication system for ITS in which an LED traffic light is used as the transmitter and a photodiode as the receiver. Several problems arise when applying visible light communication to ITS: information must be received over a long distance, and the transmitter must be tracked as the vehicle moves. We applied imaging optics to receive information over long distances, used two cameras to handle the transmitter-receiver geometry that changes over time as the vehicle moves, and added a vibration-correction technique to the system to minimize the effects of vibration. We also developed algorithms to track the transmitter. Experiments were conducted to confirm these proposals.


IEEE Journal on Selected Areas in Communications | 2015

Vehicle Motion and Pixel Illumination Modeling for Image Sensor Based Visible Light Communication

Takaya Yamazato; Masayuki Kinoshita; Shintaro Arai; Eisho Souke; Tomohiro Yendo; Toshiaki Fujii; Koji Kamakura; Hiraku Okada

Channel modeling is critical for the design and performance evaluation of visible light communication (VLC). Although a considerable amount of research has focused on indoor VLC systems using single-element photodiodes, there remains a need for channel modeling of VLC systems in outdoor mobile environments. In this paper, we describe and provide results for modeling image-sensor-based VLC for automotive applications. In particular, we examine the channel model for mobile movements in the image plane as well as channel decay with the distance between the transmitter and the receiver. Optical flow measurements were conducted for three automotive VLC situations: infrastructure-to-vehicle VLC (I2V-VLC), vehicle-to-infrastructure VLC (V2I-VLC), and vehicle-to-vehicle VLC (V2V-VLC). We describe vehicle motion by optical flow with subpixel accuracy using phase-only correlation (POC) analysis and show that a single-pinhole camera model successfully describes all three VLC cases. In addition, the luminance of the central pixel of the projected LED area was measured versus the distance between the LED and the camera. Our key findings are twofold. First, a single-pinhole camera model can be applied to vehicle-motion modeling of I2V-VLC, V2I-VLC, and V2V-VLC. Second, the DC gain at a pixel remains constant as long as the projected image of the transmitter LED occupies several pixels. In other words, if we choose the pixel with the highest luminance within the projected image of the transmitter LED, its value remains constant, and the signal-to-noise ratio does not change with distance.
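The phase-only correlation (POC) used for the optical-flow measurements can be sketched at integer-pixel resolution as follows; the paper refines the correlation peak to subpixel accuracy, which is omitted here.

```python
import numpy as np

def phase_only_correlation(img_a, img_b):
    """Estimate the integer-pixel translation (dy, dx) such that img_b is
    approximately img_a shifted by (dy, dx), via phase-only correlation."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12              # keep only the phase
    poc = np.fft.ifft2(cross).real              # correlation surface with a sharp peak
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # Convert the peak location to a signed shift (peaks past the midpoint wrap).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, poc.shape))
```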


Journal of Visual Communication and Image Representation | 2010

The Seelinder: Cylindrical 3D display viewable from 360 degrees

Tomohiro Yendo; Toshiaki Fujii; Masayuki Tanimoto; Mehrdad Panahpour Tehrani

We propose a 3D video display technique that allows multiple viewers to see 3D images over a 360-degree horizontal arc without wearing 3D glasses. This technique uses a cylindrical parallax barrier and a one-dimensional light source array. We developed an experimental display system using this technique. Since the technique is based on the parallax panoramagram, the number of parallax views and the resolution are limited by diffraction at the parallax barrier. To solve this problem, we improved the technique by revolving the parallax barrier. The improved technique was incorporated into two experimental display systems; the newer one is capable of displaying 3D color video images within a 200-mm diameter and a 256-mm height. Images have a resolution of 1254 circumferential pixels and 256 vertical pixels, and are refreshed at 30 Hz. Each pixel has a viewing angle of 60 degrees that is divided into over 70 views, so the angular parallax interval of each pixel is less than 1 degree. These pixels are arranged on a cylindrical surface so that the produced 3D images can be observed from all directions, and observers barely perceive the discrete parallax.
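The quoted display parameters can be checked with a short calculation. Since the per-pixel view count is stated only as "over 70", the value 70 below is a lower-bound assumption.

```python
# Specifications quoted in the abstract above (70 views is a lower bound).
circumferential_pixels = 1254
vertical_pixels = 256
views_per_pixel = 70
viewing_angle_deg = 60.0
refresh_hz = 30

angular_interval = viewing_angle_deg / views_per_pixel
print(f"angular parallax interval ~= {angular_interval:.2f} deg")    # ~0.86, below 1 degree

rays_per_refresh = circumferential_pixels * vertical_pixels * views_per_pixel
print(f"directional samples per refresh ~= {rays_per_refresh:,}")    # ~22.5 million
print(f"directional samples per second  ~= {rays_per_refresh * refresh_hz:,}")
```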

Collaboration


Dive into Tomohiro Yendo's collaborations.

Top Co-Authors

Koji Kamakura, Chiba Institute of Technology
Norishige Fukushima, Nagoya Institute of Technology