
Publication


Featured research published by Takeshi Naemura.


Optics Express | 2001

3-D computer graphics based on integral photography

Takeshi Naemura; T. Yoshida; Hiroshi Harashima

Integral photography (IP), one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying the light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering, founded on a 4-D ray-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.
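
The 4-D ray space mentioned above can be made concrete with a small sketch. Below is a minimal nearest-ray renderer in Python, assuming a two-plane parameterization with the lens plane at z = 0 and the image plane at z = 1; the array layout and function name are illustrative, not the paper's implementation.

```python
import numpy as np

def render_novel_view(L, cam, uv_coords, st_coords):
    """Render a novel view from a 4-D ray space (two-plane light field).

    L         : array (U, V, S, T, 3); a ray is indexed by where it crosses
                the lens plane z=0 at (u, v) and the image plane z=1 at (s, t).
    cam       : (cx, cy, cz) virtual viewpoint with cz < 0.
    uv_coords : pair of 1-D arrays with the physical u and v sample positions.
    st_coords : pair of 1-D arrays with the physical s and t sample positions.
    Returns an (S, T, 3) image, nearest-ray sampled.
    """
    us, vs = uv_coords
    ss, ts = st_coords
    cx, cy, cz = cam
    lam = (0.0 - cz) / (1.0 - cz)          # ray parameter at the z=0 plane
    img = np.zeros((len(ss), len(ts), 3))
    for i, s in enumerate(ss):
        for j, t in enumerate(ts):
            u = cx + lam * (s - cx)        # where this pixel's ray hits z=0
            v = cy + lam * (t - cy)
            ui = np.abs(us - u).argmin()   # nearest captured ray
            vi = np.abs(vs - v).argmin()
            img[i, j] = L[ui, vi, i, j]
    return img
```

Quadrilinear interpolation over the four ray-space axes, rather than nearest-ray lookup, is the usual refinement of this scheme.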


Signal Processing: Image Communication | 2006

Layered light-field rendering with focus measurement

Keita Takahashi; Takeshi Naemura

This paper introduces a new image-based rendering method that uses input from an array of cameras and synthesizes high-quality free-viewpoint images in real time. The input cameras can be roughly arranged, provided they are calibrated in advance. Our method uses a set of depth layers to deal with scenes with large depth ranges, but does not require prior knowledge of the scene geometry. Instead, during the on-the-fly process, the optimal depth layer is automatically assigned to each pixel of the synthesized image by our focus measurement scheme. We implemented the rendering method and achieved nearly interactive frame rates on a commodity PC. This paper also discusses the focus measurement scheme in both the spatial and frequency domains. The discussion in the spatial domain is practical since it can be applied to arbitrary camera arrays. On the other hand, the frequency-domain analysis is theoretically interesting since it shows that signal-processing theory is applicable to the depth-assignment problem.
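
To make the layer-assignment idea concrete, here is a minimal plane-sweep sketch in Python. It assumes a rectified 1-D camera array so that each depth layer reduces to a pure horizontal shift, and uses the variance of the reprojected colors as the (inverse) focus measure; np.roll wraps at the image borders, which a real implementation would mask out.

```python
import numpy as np

def layered_render(images, baselines, disparities):
    """Pick a depth layer per pixel via a variance-based focus measure.

    images      : list of (H, W, 3) float arrays from a rectified 1-D array;
                  baselines[k] is camera k's offset from the virtual view.
    disparities : candidate disparities, one per depth layer.
    For each layer, every image is shifted as if the scene lay at that depth;
    where the guess is right, the shifted pixels agree (low variance), so that
    layer is "in focus" for the pixel.
    """
    H, W, _ = images[0].shape
    best_var = np.full((H, W), np.inf)
    out = np.zeros((H, W, 3))
    for d in disparities:
        stack = []
        for img, b in zip(images, baselines):
            shift = int(round(d * b))           # per-camera shift at this layer
            stack.append(np.roll(img, shift, axis=1))
        stack = np.stack(stack)                 # (K, H, W, 3)
        mean = stack.mean(axis=0)
        var = ((stack - mean) ** 2).mean(axis=(0, 3))  # focus measure
        better = var < best_var
        best_var[better] = var[better]
        out[better] = mean[better]              # composite the winning layer
    return out
```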


IEEE Computer Graphics and Applications | 2004

Thermo-key: human region segmentation from video

Kazutaka Yasuda; Takeshi Naemura; Hiroshi Harashima

We focus on the segmentation of human regions, an area that has received a significant amount of research. Our approach is based on invisible thermal information that can be measured directly, without any estimation. Specifically, we measure infrared radiation and use the resulting thermal information as the key, hence we call our proposed system thermo-key. A combined color and thermal camera measures the temperature distribution of its field of view. The system segments the human region from the video sequence captured by the color camera in real time, with high robustness against lighting and background conditions.
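
The keying step reduces to a band threshold on temperature once the thermal and color images are registered. A minimal sketch, assuming a per-pixel registered thermal frame in degrees Celsius; the temperature band is a rough guess, not the paper's calibrated thresholds:

```python
import numpy as np

def thermo_key(color_frame, thermal_frame, t_low=30.0, t_high=38.0):
    """Key out the human region using temperature instead of chroma.

    color_frame   : (H, W, 3) uint8 image, registered with the thermal image.
    thermal_frame : (H, W) float array of temperatures in degrees Celsius.
    t_low, t_high : band around human skin temperature (rough assumption).
    Returns the color frame with non-human pixels zeroed out.
    """
    mask = (thermal_frame >= t_low) & (thermal_frame <= t_high)
    keyed = color_frame.copy()
    keyed[~mask] = 0          # background removed, robust to lighting changes
    return keyed
```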


Electronic Imaging | 2008

Laser-plasma scanning 3D display for putting digital contents in free space

Hideo Saito; Hidei Kimura; Satoru Shimada; Takeshi Naemura; Jun Kayahara; Songkran Jarusirisawad; Vincent Nozick; Hiroyo Ishikawa; Toshiyuki Murakami; Jun Aoki; Akira Asano; T. Kimura; Masayuki Kakehata; Fumio Sasaki; Hidehiko Yashiro; Masahiko Mori; Kenji Torizuka; Kouta Ino

We present a novel 3D display that can show 3D content in free space using laser-plasma scanning in the air. Laser-plasma technology can generate a point illumination at an arbitrary position in free space. By scanning the position of this illumination, we can display a set of point illuminations in space, realizing a 3D display. This display was first presented in the Emerging Technologies program at SIGGRAPH 2006 and is the basic platform of our 3D display project. In this paper, we introduce the history of the development of the laser-plasma scanning 3D display and then describe recent developments in 3D content analysis and processing for realizing innovative media presentations in free 3D space. One recent development allows 3D content data to be supplied to the display in a very flexible manner, which gives us a platform for building interactive 3D content presentation systems, such as interactive art presentations. We also present the future plan of this research project.
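
The one concrete kernel here is that the display draws a frame as a sequence of scanned plasma dots, so content must fit within a per-frame dot budget. A hypothetical sketch (the dot rate and frame rate below are placeholders, not the device's actual specifications):

```python
def plan_scan(points, dots_per_second=1000, fps=15):
    """Fit a 3-D point set into the scanner's per-frame dot budget.

    points : list of (x, y, z) positions to illuminate. The dot rate and
    frame rate are assumed values. If the content has more points than one
    frame allows, subsample uniformly so the frame still completes in time.
    """
    budget = dots_per_second // fps           # dots drawable per frame
    if len(points) <= budget:
        return points
    step = len(points) / budget
    return [points[int(i * step)] for i in range(budget)]
```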


3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2008

Real-Time All-in-Focus Video-Based Rendering Using a Network Camera Array

Yuichi Taguchi; Keita Takahashi; Takeshi Naemura

We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras connected to a single PC over Gigabit Ethernet. To render a high-quality novel view, we estimate a view-dependent, per-pixel depth map in real time using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to use the CPU and GPU efficiently, independently and in parallel. With QVGA input video, our system renders free-viewpoint video at up to 30 fps, depending on the rendering parameters. Experimental results show high-quality images synthesized from various scenes.
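
A quick back-of-the-envelope check shows why the single Gigabit link is a real constraint and why the cameras presumably deliver compressed streams (the compression ratio below is our assumption, not a figure from the paper):

```python
# QVGA resolution, 30 fps, and 64 cameras are from the paper; the use of
# compressed (e.g., JPEG) streams is an assumption on our part.
w, h, bytes_per_px, fps, n_cams = 320, 240, 3, 30, 64

raw_bps = w * h * bytes_per_px * 8 * fps * n_cams
print(f"raw RGB: {raw_bps / 1e9:.2f} Gbit/s")   # ~3.54 Gbit/s > 1 Gbit/s

ratio = 10  # modest JPEG compression (assumed)
print(f"compressed: {raw_bps / ratio / 1e9:.2f} Gbit/s")  # fits in GigE
```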


IEEE Virtual Reality Conference | 2002

Virtual shadows - enhanced interaction in mixed reality environment

Takeshi Naemura; T. Nitta; Atsushi Mimura; Hiroshi Harashima

We propose the concepts of virtual light and virtual shadow with the aim of achieving a mixed reality environment focused on shadows. In this proposal, we divide the concept of virtual shadow into four categories and implement an interactive application for each: (a) real-to-virtual shadows for rigid objects; (b) real-to-virtual shadows for non-rigid objects; (c) image-based virtual-to-virtual shadows; and (d) virtual-to-real shadows. In these applications, the shadow of a real object can be seen projected onto the virtual world and vice versa. The proposed concepts should contribute to the realization of a mixed reality environment that provides a novel sense of interaction.
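
For the virtual-to-real direction, one standard way to cast a virtual object's shadow onto a known real plane is the classic planar projection matrix; the sketch below shows that textbook construction, without claiming it is what the paper implements.

```python
import numpy as np

def shadow_matrix(plane, light):
    """4x4 matrix that flattens geometry onto a plane, as seen from a light.

    plane : (a, b, c, d) with ax + by + cz + d = 0 (e.g., the real tabletop).
    light : (x, y, z, w) homogeneous light position (w=1 point, w=0 directional).
    Classic planar projected-shadow construction; the paper's four shadow
    categories involve richer machinery than this.
    """
    p = np.asarray(plane, dtype=float)
    l = np.asarray(light, dtype=float)
    return (p @ l) * np.eye(4) - np.outer(l, p)

# Usage: transform the virtual object's vertices by shadow_matrix(...) and
# draw the flattened result in dark gray on top of the plane.
```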


International Conference on Computer Graphics and Interactive Techniques | 2008

ForceTile: tabletop tangible interface with vision-based force distribution sensing

Yasuaki Kakehi; Kensei Jo; Katsunori Sato; Kouta Minamizawa; Hideaki Nii; Naoki Kawakami; Takeshi Naemura; Susumu Tachi

Today, placing physical objects on a tabletop display is a common form of intuitive tangible input [Ullmer and Ishii 1997]. The overall goal of our project is to increase the interactivity of tabletop tangible interfaces. To achieve this, we propose a novel tabletop tangible interface named ‘ForceTile.’ The interface detects the force distribution on its surface, as well as its position, rotation, and ID, using a vision-based approach. In our previous optical force sensor, “GelForce” [Kamiyama et al. 2004], the elastic body and cameras are fixed together. In contrast, in this system users can freely place and move multiple tile-shaped interfaces on the tabletop display. Furthermore, users can interact with images projected on the tabletop screen by moving, pushing, or pinching the ForceTiles.
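
The vision-based force sensing can be sketched as tracking markers embedded in the elastic body and mapping their displacements to forces. A linear-elastic toy version (GelForce's actual reconstruction uses a calibrated elastic model; the stiffness constant here is a placeholder):

```python
import numpy as np

def force_field(markers_rest, markers_now, stiffness=1.0):
    """Estimate surface forces from tracked marker displacements.

    markers_rest / markers_now : (N, 2) arrays of marker positions in the
    camera image, at rest and under load. Assumes force proportional to
    displacement, a crude linear-elastic approximation of the real system.
    """
    disp = np.asarray(markers_now, float) - np.asarray(markers_rest, float)
    return stiffness * disp    # (N, 2) per-marker force estimates
```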


Eurographics | 2005

Glare Generation Based on Wave Optics

Masanori Kakimoto; Kaoru Matsuoka; Tomoyuki Nishita; Takeshi Naemura; Hiroshi Harashima

This paper proposes a novel and general method of glare generation based on wave optics. A glare image is regarded as the result of Fraunhofer diffraction, which is equivalent to a 2D Fourier transform of the image of the given apertures or obstacles. In conventional methods, the shapes of glare images are categorized according to their source apertures, such as pupils and eyelashes, and their basic shapes (e.g., halos, coronas, or radial streaks) are manually generated as templates, mainly based on statistical observation. Realistic variation of these basic shapes often depends on the use of random numbers. Our proposed method computes glare images fully automatically from aperture images and can be applied universally to all kinds of apertures, including camera diaphragms. It can handle dynamic changes in the position of the aperture relative to the light source, which enables subtle movement or rotation of the glare streaks. Spectra can also be simulated in the glare, since the intensity of diffraction depends on the wavelength of light. The resulting glare image is superimposed onto a given computer-generated image containing high-intensity light sources or reflections, aligning the center of the glare image with the high-intensity areas. Our method is implemented as multipass rendering software. By precomputing the dynamic glare image set and storing it in texture memory, the software runs at an interactive rate.
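
The core computation is compact: the glare pattern is the squared magnitude of the 2-D Fourier transform of the aperture image, and the spectral colors arise because the pattern's scale grows with wavelength. A minimal numpy sketch, with illustrative RGB wavelengths:

```python
import numpy as np

def glare_rgb(aperture, wavelengths=(610e-9, 550e-9, 465e-9), ref=550e-9):
    """Glare as Fraunhofer diffraction: |2-D FFT of the aperture image|^2.

    aperture : (H, W) float array, 1 inside the opening (pupil, diaphragm,
    eyelash silhouette, ...), 0 elsewhere. The RGB wavelengths are a rough
    illustrative choice.
    """
    base = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
    base /= base.max()
    h, w = base.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    rgb = np.zeros((h, w, 3))
    for c, lam in enumerate(wavelengths):
        s = ref / lam   # diffraction angle grows with wavelength, so the red
                        # channel is a magnified copy of the base pattern
        ys = np.clip(((yy - h / 2) * s + h / 2).astype(int), 0, h - 1)
        xs = np.clip(((xx - w / 2) * s + w / 2).astype(int), 0, w - 1)
        rgb[..., c] = base[ys, xs]
    return rgb  # superimpose onto the rendered image at bright light sources
```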


IEEE MultiMedia | 2004

Internet communication using real-time facial expression analysis and synthesis

Naiwala P. Chandrasiri; Takeshi Naemura; Mitsuru Ishizuka; Hiroshi Harashima; István Barakonyi

In this paper, the authors develop a system that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities. The system combines visual, auditory, and primary interfaces into one coherent multimodal chat experience. Users represent themselves with agents selected from a predefined group. When a user shows a particular expression while typing a message, the 3D agent at the receiving end speaks the message aloud while replaying the recognized facial expression sequences, and it augments the synthesized voice with appropriate emotional content. Because the visual data exchange is based on the MPEG-4 high-level Facial Animation Parameter for facial expressions (FAP 2), rather than real-time video, the method requires very low bandwidth.
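
The bandwidth argument is easy to see from the payload: a chat event carries an expression parameter and the text rather than video frames. A hypothetical message format (field names are ours; the real system uses MPEG-4 FAP 2 syntax):

```python
import json

# A chat event as parameters instead of pixels. Field names are invented
# for illustration, not taken from the paper or the MPEG-4 standard.
msg = {
    "expression": "joy",   # one of the high-level FAP 2 expression classes
    "intensity": 0.8,      # drives the agent's face and the voice emotion
    "text": "Nice to meet you!",
}
payload = json.dumps(msg).encode()
print(len(payload), "bytes per message")  # tens of bytes vs. kB/frame video
```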


Visual Communications and Image Processing | 1998

Real-time video-based rendering for augmented spatial communication

Takeshi Naemura; Hiroshi Harashima

In the field of 3-D image communication and virtual reality, it is very important to establish a method of displaying arbitrary views of a 3-D scene. 3-D geometric models of scene objects are certainly useful for this purpose, since computer graphics techniques can synthesize arbitrary views of the models. It is, however, not easy to obtain models of objects in the physical world. To avoid this problem, a new technique called image-based rendering has been proposed, which interpolates between views by warping input images using depth information or correspondences between multiple images. To date, most work on this technique has concentrated on static scenes or objects. To cope with 3-D scenes in motion, we must establish ways of processing multiple video sequences in real time and of constructing an accurate camera array system. In this paper, the authors propose a real-time method of rendering arbitrary views of 3-D scenes in motion. The proposed method is realized with a sixteen-camera array system with software adjustment support and a video-based rendering system. Appropriate views of 3-D scenes are synthesized in real time according to the observer's viewpoint. Experimental results show the potential applicability of the proposed method to augmented spatial communication systems.
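
The rendering side can be sketched as blending the cameras nearest the observer's viewpoint. A minimal cross-fade version, assuming a 1-D camera array; real view interpolation also warps by depth or correspondences, as the abstract notes:

```python
import numpy as np

def interpolate_view(images, cam_xs, viewer_x):
    """Blend the two cameras nearest the observer's viewpoint.

    images   : list of (H, W, 3) float arrays from a 1-D camera array.
    cam_xs   : camera positions along the array axis.
    viewer_x : observer position projected onto that axis.
    """
    xs = np.asarray(cam_xs, float)
    order = np.argsort(np.abs(xs - viewer_x))
    i, j = order[0], order[1]                  # two nearest cameras
    span = xs[j] - xs[i]
    w = 0.0 if span == 0 else (viewer_x - xs[i]) / span
    w = float(np.clip(w, 0.0, 1.0))            # cross-fade weight
    return (1 - w) * images[i] + w * images[j]
```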

Collaboration


Dive into Takeshi Naemura's collaborations.

Top Co-Authors

Masahide Kaneko

University of Electro-Communications
