Takehiro Tawara
Max Planck Society
Publications
Featured research published by Takehiro Tawara.
international conference on computer graphics and interactive techniques | 2001
Karol Myszkowski; Takehiro Tawara; Hiroyuki Akamine; Hans-Peter Seidel
We present a method for efficient global illumination computation in dynamic environments by taking advantage of temporal coherence of lighting distribution. The method is embedded in the framework of stochastic photon tracing and density estimation techniques. A locally operating energy-based error metric is used to prevent photon processing in the temporal domain for the scene regions in which lighting distribution changes rapidly. A perception-based error metric suitable for animation is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained. Furthermore, the computation cost is reduced compared to the traditional approaches operating solely in the spatial domain.
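To make the role of the energy-based error metric concrete, here is a minimal sketch of the per-region decision; the function names, the per-region energy arrays, and the 10% threshold are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def select_regions_for_update(energy_prev, energy_curr, rel_threshold=0.1):
    """Energy-based error metric (sketch): flag mesh regions whose locally
    estimated photon energy changed by more than `rel_threshold` between
    frames.  Only those regions discard old photons; the rest reuse photon
    hits collected for previous frames."""
    denom = np.maximum(energy_prev, 1e-6)          # avoid division by zero
    rel_change = np.abs(energy_curr - energy_prev) / denom
    return rel_change > rel_threshold              # True -> reprocess photons

def combine_photon_statistics(hits_per_frame, reprocess_mask):
    """Average per-region irradiance estimates over the temporal window,
    dropping earlier frames only for regions flagged as rapidly changing."""
    hits = np.stack(hits_per_frame)                # shape: (frames, regions)
    weights = np.ones_like(hits)
    weights[:-1, reprocess_mask] = 0.0             # keep only the current frame there
    return (hits * weights).sum(0) / np.maximum(weights.sum(0), 1.0)
```

Averaging photon statistics over the temporal window in the unflagged regions is what keeps the residual noise coherent from frame to frame.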
IEEE Transactions on Visualization and Computer Graphics | 2000
Karol Myszkowski; Przemyslaw Stefan Rokita; Takehiro Tawara
We consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance, we use a combination of a hybrid ray tracing and image-based rendering (IBR) technique and a novel perception-based antialiasing technique. In our rendering solution, we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal animation quality metric (AQM) is used to automatically guide such a hybrid rendering. The image flow (IF) obtained as a byproduct of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity.
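A minimal sketch of the hybrid per-pixel decision, assuming hypothetical names for the warp-validity mask and the AQM error map; the actual AQM and IBR machinery described above is far more involved.

```python
import numpy as np

def choose_pixels_to_raytrace(warp_valid, aqm_error, visibility_threshold=1.0):
    """Hybrid rendering decision (sketch): a pixel is derived by IBR (3D
    warping of nearby keyframes) when the warp is valid *and* the AQM
    predicts the resulting difference to be below the visibility threshold;
    otherwise it is ray traced from scratch."""
    return (~warp_valid) | (aqm_error >= visibility_threshold)

# Example: a few percent of pixels fall in disocclusions or exceed the metric.
h, w = 256, 256
warp_valid = np.random.rand(h, w) > 0.02
aqm_error = np.random.rand(h, w) * 1.2
mask = choose_pixels_to_raytrace(warp_valid, aqm_error)
print(f"ray-traced pixels: {mask.mean():.1%}")
```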
spring conference on computer graphics | 2004
Takehiro Tawara; Karol Myszkowski; Kirill Dmitriev; Vlastimil Havran; Cyrille Damez; Hans-Peter Seidel
Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in lighting distribution for subsequent frames to improve the computation performance and overall animation quality. Our strategy relies on extending into the temporal domain well-known global illumination techniques such as density estimation photon tracing, photon mapping, and bi-directional path tracing, which were originally designed to handle static scenes only.
symposium on 3d user interfaces | 2010
Takehiro Tawara; Kenji Ono
We propose a two-handed direct manipulation system to achieve complex volume segmentation of CT/MRI data in Augmented Reality with a remote controller attached to a motion tracking cube. At the same time, the segmented data is displayed by direct volume rendering using a programmable GPU. Our system achieves real-time visualization of modifications to the volume data with complex shading, including transparency control by changing transfer functions, display of arbitrary cross sections, and rendering of multiple materials using a local illumination model. Our goal is to build a system that facilitates direct manipulation of volumetric CT/MRI data for segmentation in Augmented Reality. Volume segmentation is a challenging problem, and segmented data plays an important role in visualization and analysis.
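For illustration only, the following CPU sketch mimics the front-to-back compositing that a GPU fragment shader would perform for direct volume rendering with a user-editable transfer function; the 256-entry table, the opacity correction, and the function names are assumptions, not the authors' shader code.

```python
import numpy as np

def composite_ray(densities, tf_rgba, step=1.0):
    """Front-to-back compositing of one ray through the volume (CPU sketch
    of what the GPU fragment shader does).  `tf_rgba` is a 256x4
    transfer-function table; editing it changes colour and opacity
    interactively without touching the volume data itself."""
    color = np.zeros(3)
    alpha = 0.0
    for d in densities:                             # scalar samples along the ray
        r, g, b, a = tf_rgba[int(np.clip(d, 0.0, 1.0) * 255)]
        a = 1.0 - (1.0 - a) ** step                 # opacity correction for step size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                            # early ray termination
            break
    return color, alpha
```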
Proceedings Theory and Practice of Computer Graphics, 2004. | 2004
Takehiro Tawara; Karol Myszkowski; Hans-Peter Seidel
In this paper we propose an efficient algorithm for handling strong secondary light sources within the photon mapping framework. We introduce an additional photon map as an implicit representation of such light sources. At the rendering stage this map is used for the explicit sampling of strong indirect lighting in a similar way as it is usually performed for primary light sources. Our technique works fully automatically, improves the computation performance, and leads to better image quality than traditional rendering approaches.
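A rough sketch of the idea, assuming a hypothetical SecondaryLightPhotonMap class and a k-nearest-neighbour density estimate: photons deposited on a strong secondary source are stored in their own map, which is then queried explicitly at shading time much like a primary light.

```python
import numpy as np
from scipy.spatial import cKDTree

class SecondaryLightPhotonMap:
    """Sketch: a separate photon map holding only photons deposited on a
    strong secondary light source (e.g. a brightly lit wall).  At render
    time it is sampled explicitly, like a primary light, via k-nearest
    density estimation."""
    def __init__(self, positions, powers):
        self.tree = cKDTree(positions)      # photon positions, shape (n, 3)
        self.powers = powers                # photon powers,    shape (n, 3)

    def estimate_irradiance(self, x, k=50):
        dist, idx = self.tree.query(x, k=k)
        radius2 = dist[-1] ** 2
        return self.powers[idx].sum(0) / (np.pi * radius2)   # E ~ sum(Phi) / (pi r^2)
```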
international conference on computer graphics and interactive techniques | 2009
Takehiro Tawara; Kenji Ono
We propose a novel two-handed direct manipulation system to achieve complex volume segmentation of CT/MRI data in real 3D space with a remote controller attached to a motion tracking cube. At the same time, the segmented data is displayed by direct volume rendering using a programmable GPU. Our system achieves real-time visualization of modifications to the volume data with complex shading, including transparency control by changing transfer functions, display of arbitrary cross sections, and rendering of multiple materials using a local illumination model.
electronic imaging | 2002
Karol Myszkowski; Takehiro Tawara; Hans-Peter Seidel; Bernice E. Rogowitz; Thrasyvoulos N. Pappas
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
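As a small illustration of the velocity input required by Kelly's spatio-velocity CSF, the sketch below converts image flow into retinal velocity under the paper's conservative smooth-pursuit assumption; the function name, the unit handling, and the tracked-object mask are assumptions made for illustration.

```python
import numpy as np

def retinal_velocity(flow_px, fps, px_per_degree, tracked_mask):
    """Per-pixel retinal velocity (deg/s) fed to the spatio-velocity CSF
    (sketch).  `flow_px` is the image flow in pixels/frame obtained from
    3D warping; pixels covered by animated objects (`tracked_mask`) are
    conservatively assumed to be fully stabilised by smooth-pursuit eye
    motion, i.e. their retinal velocity is set to zero."""
    speed = np.linalg.norm(flow_px, axis=-1) * fps / px_per_degree
    speed[tracked_mask] = 0.0
    return speed
```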
VRIPHYS | 2009
Takehiro Tawara; Kenji Ono
The sense of reality in Virtual Reality is affected by many research areas; therefore, individual lines of research must be combined to achieve the ultimate goal of Virtual Reality. In this paper, we present an application of a novel and clever combination of height field wave simulation, photorealistic rendering, and a 6DOF manipulator exploiting Augmented Reality. In our system, a user can touch a virtual water surface in a natural manner with a real pen attached to a tracking cube. We also take into account the rendering of optical effects like shadows and caustics, which give users a great deal of reality. We show how such a combination is important to achieving reality in the videos of our real-time demonstrations.
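The height-field part of such a system can be illustrated with a standard explicit wave update (a generic sketch, not the authors' simulator); the wave speed, time step, and damping values below are arbitrary assumptions.

```python
import numpy as np

def step_height_field(h, v, c=1.0, dt=0.05, damping=0.995):
    """One explicit step of a height-field wave simulation (sketch).
    `h` is the water height grid and `v` its vertical velocity; the pen
    interaction can be modelled by locally pushing `h` down where the
    tracked tip touches the surface."""
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    v = (v + c * c * lap * dt) * damping           # accelerate by the Laplacian, then damp
    h = h + v * dt
    return h, v
```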
Archive | 2006
Takehiro Tawara; Karol Myszkowski; Xavier Pueyo
The production of high quality animations which feature compelling lighting effects is a computationally very heavy task when traditional rendering approaches are used, where each frame is computed separately. The fact that most of the computation must be restarted from scratch for each frame leads to unnecessary redundancy. Since temporal coherence is typically not exploited, temporal aliasing problems are also more difficult to address. Many small errors in lighting distribution cannot be perceived by human observers when they are coherent in the temporal domain. However, when such coherence is lost, the resulting animations suffer from unpleasant flickering effects. In this thesis, we propose global illumination and rendering algorithms which are designed specifically to combat those problems. We achieve this goal by exploiting temporal coherence in the lighting distribution between subsequent animation frames. Our strategy relies on extending into the temporal domain well-known global illumination and rendering techniques such as density estimation path tracing, photon mapping, ray tracing, and irradiance caching, which were originally designed to handle static scenes only. Our techniques mainly focus on the computation of indirect illumination, which is the most expensive part of global illumination modelling.
computer graphics international | 2004
Takehiro Tawara; Karol Myszkowski; Hans-Peter Seidel