
Publication


Featured research published by Taiki Fukiage.


ACM Transactions on Applied Perception | 2016

Deformation Lamps: A Projection Technique to Make Static Objects Perceptually Dynamic

Takahiro Kawabe; Taiki Fukiage; Masataka Sawayama; Shin'ya Nishida

Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as Deformation Lamps, is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our technique was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance caused by light projection. Finally, we discuss how our technique can make observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies.
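To make the key idea above concrete, here is a minimal sketch (not the authors' implementation) of generating the dynamic luminance signal that a Deformation Lamps-style setup would project: a grayscale copy of the target image is warped by a small oscillating deformation, and only the luminance difference from the original is kept, leaving the object's own color and texture physically untouched. The deformation field, amplitude, frame count, and function names are illustrative assumptions.

```python
# Minimal sketch of the "project only dynamic luminance" idea behind Deformation Lamps.
# Not the authors' implementation: the deformation field, amplitude, and frame count
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import map_coordinates

def luminance(img_rgb):
    """Rec. 709 luminance of an RGB image with values in [0, 1]."""
    return img_rgb @ np.array([0.2126, 0.7152, 0.0722])

def dynamic_luminance_frames(img_rgb, n_frames=30, amplitude=2.0, wavelength=40.0):
    """Return luminance-difference frames to be projected onto the static target.

    Each frame is L(warped) - L(original): only the dynamic luminance component,
    with the target's color and texture left untouched.
    """
    lum = luminance(img_rgb)
    h, w = lum.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    frames = []
    for t in range(n_frames):
        phase = 2 * np.pi * t / n_frames
        # Horizontal sinusoidal displacement that oscillates over time.
        dx = amplitude * np.sin(2 * np.pi * yy / wavelength + phase)
        warped = map_coordinates(lum, [yy, xx + dx], order=1, mode="nearest")
        frames.append(warped - lum)  # signed luminance increment to project
    return frames
```

In an actual projection setup this signed luminance increment would be added to a baseline projector illumination so that decrements, not only increments, can be rendered.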


Journal of Vision | 2011

A flash-drag effect in random motion reveals involvement of preattentive motion processing

Taiki Fukiage; David Whitney; Ikuya Murakami

The flash-drag effect (FDE) refers to the phenomenon in which the position of a stationary flashed object in one location appears shifted in the direction of nearby motion. Over the past decade, it has been debated how bottom-up and top-down processes contribute to this illusion. In this study, we demonstrate that randomly phase-shifting gratings can produce the FDE. In the random motion sequence we used, the FDE inducer (a sinusoidal grating) jumped to a random phase every 125 ms and stood still until the next jump. Because this random sequence could not be tracked attentively, it was impossible for the observer to discern the jump direction at the time of the flash. By sorting the data based on the flash's onset time relative to each jump time in the random motion sequence, we found that a large FDE with a broad temporal tuning occurred around 50 to 150 ms before the jump and that this effect was not correlated with any other jumps in the past or future. These results suggest that as few as two frames of unpredictable apparent motion can preattentively cause the FDE with a broad temporal tuning.
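For readers who want to picture the inducer, the following is a small sketch of the random phase-jump sequence described above: a sinusoidal grating that jumps to a random phase every 125 ms and remains static between jumps. The spatial frequency, refresh rate, and function name are placeholder assumptions, not the study's parameters.

```python
# Sketch of the random phase-jump grating described in the abstract:
# a jump to a random phase every 125 ms, static between jumps.
# Spatial frequency and refresh rate are illustrative assumptions.
import numpy as np

def random_jump_grating(width=256, n_frames=240, refresh_hz=60.0,
                        jump_interval_ms=125.0, cycles=4, rng=None):
    """Return (frames, jump_frames): one 1-D grating per frame plus the jump indices."""
    rng = np.random.default_rng(rng)
    # Jump interval rounded to the nearest whole frame at this refresh rate.
    frames_per_jump = int(round(jump_interval_ms / 1000.0 * refresh_hz))
    x = np.linspace(0, 1, width, endpoint=False)
    frames, jump_frames = [], []
    phase = rng.uniform(0, 2 * np.pi)
    for f in range(n_frames):
        if f > 0 and f % frames_per_jump == 0:
            phase = rng.uniform(0, 2 * np.pi)  # unpredictable phase jump
            jump_frames.append(f)
        frames.append(np.sin(2 * np.pi * cycles * x + phase))
    return np.array(frames), jump_frames
```

The returned jump indices are what flash-onset times would be sorted against, as in the analysis described above.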


International Symposium on Mixed and Augmented Reality | 2012

Reduction of contradictory partial occlusion in mixed reality by using characteristics of transparency perception

Taiki Fukiage; Takeshi Oishi; Katsushi Ikeuchi

One of the challenges in mixed reality (MR) applications is handling contradictory occlusions between real and virtual objects. Previous studies have tried to solve the occlusion problem by extracting the foreground region from the real image. However, real-time occlusion handling remains difficult because precisely segmenting foreground regions in a complex scene is computationally expensive. In this study, we therefore propose an alternative solution to the occlusion problem that does not require precise foreground-background segmentation. In our method, a virtual object is blended with the real scene so that the virtual object is perceived as being behind the foreground region. For this purpose, we first investigated the characteristics of human transparency perception in a psychophysical experiment. We then developed a blending algorithm applicable to real scenes based on the results of the experiment.


International Symposium on Mixed and Augmented Reality | 2014

Visibility-based blending for real-time applications

Taiki Fukiage; Takeshi Oishi; Katsushi Ikeuchi

There are many situations in which virtual objects are presented half-transparently over a background in real-time applications. In such cases, we often want to show the object with constant visibility. However, with conventional alpha blending, the visibility of a blended object varies substantially depending on the colors, textures, and structures of the background scene. To overcome this problem, we present a framework for blending images based on a subjective metric of visibility. In our method, a blending parameter is locally and adaptively optimized so that the visibility at each location reaches the target level. To predict the visibility of an object blended with an arbitrary parameter, we utilize one of the error-visibility metrics developed for image quality assessment. In this study, we demonstrated that the metric we used can linearly predict the visibility of a blended pattern on various texture images, and showed that the proposed blending method works in practical situations assuming augmented reality.
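The following sketch illustrates the kind of per-location optimization the abstract describes: for each local window, search for a blending weight whose resulting visibility reaches a target level. The visibility measure here is a crude RMS-difference proxy standing in for the error-visibility metric used in the paper, and the window size, tolerance, blockwise layout, and function names are assumptions rather than the published method.

```python
# Sketch of visibility-targeted blending: per-window search for an alpha
# that brings a visibility score of the blended object to a target level.
# The visibility proxy (RMS luminance change within a window) is a placeholder
# for the error-visibility metric used in the paper; window size, tolerance,
# and alpha bounds are assumptions. Inputs are float arrays in [0, 1].
import numpy as np

def visibility_proxy(background, blended):
    """Crude stand-in metric: RMS difference introduced by blending."""
    return float(np.sqrt(np.mean((blended - background) ** 2)))

def alpha_for_target_visibility(bg_win, fg_win, target, tol=1e-3, iters=30):
    """Binary-search an alpha in [0, 1] whose blend reaches the target visibility."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        blended = (1 - mid) * bg_win + mid * fg_win
        v = visibility_proxy(bg_win, blended)
        if abs(v - target) < tol:
            break
        if v < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def blend_with_constant_visibility(background, foreground, target=0.1, win=32):
    """Blend the foreground over the background with a locally adapted alpha per window."""
    out = background.copy()
    h, w = background.shape[:2]
    for y in range(0, h, win):
        for x in range(0, w, win):
            bg_win = background[y:y + win, x:x + win]
            fg_win = foreground[y:y + win, x:x + win]
            a = alpha_for_target_visibility(bg_win, fg_win, target)
            out[y:y + win, x:x + win] = (1 - a) * bg_win + a * fg_win
    return out
```

Because this proxy grows monotonically with the blending weight, a simple binary search suffices here; the paper instead relies on a perceptually validated error-visibility metric, as noted in the abstract.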


Vision Research | 2010

The tilt aftereffect occurs independently of the flash-lag effect

Taiki Fukiage; Ikuya Murakami

The flash-lag effect refers to the phenomenon in which a stationary stimulus flashed adjacent to a moving stimulus appears to lag behind it. We investigated whether the flash-lag effect affected the tilt aftereffect, using two sets of vertical gratings, one flashed and one moving, that formed a specific orientation when aligned at a specific temporal offset. Our results show that a change in the perceptual appearance of the stimuli in the presence of the flash-lag effect had a negligible influence on the tilt aftereffect. These data suggest that the flash-lag effect originates at a different neural processing stage from the early linear processing that presumably mediates the tilt aftereffect.


Journal of Vision | 2014

A simple photometric factor in perceived depth order of bistable transparency patterns

Taiki Fukiage; Takeshi Oishi; Katsushi Ikeuchi

Previous studies on perceptual transparency defined the photometric condition in which the perceived depth ordering between two surfaces becomes ambiguous. Even under this bistable transparency condition, it is known that depth-order perceptions are often biased toward one specific interpretation (Beck, Prazdny, & Ivry, 1984; Delogu, Fedorov, Belardinelli, & van Leeuwen, 2010; Kitaoka, 2005; Oyama & Nakahara, 1960). In this study, we examined what determines the perceived depth ordering for bistable transparency patterns using stimuli that simulated two partially overlapping disks, resulting in four regions: a (background), b (portion of the right disk), p (portion of the left disk), and q (shared region). In contrast to the previous theory, which proposed that contrast against the background region (i.e., contrast at contour b/a and at contour p/a) contributes to the perceived depth order in bistable transparency patterns, the present study demonstrated that contrast against the background region has little influence on perceived depth order compared with contrast against the shared region (i.e., contrast at contour b/q and at contour p/q). In addition, we found that the perceived depth ordering is well predicted by a simpler model that takes into account only the relative size of the lightness differences against the shared region. Specifically, the probability that the left disk is perceived as being in front is proportional to (|b - q| - |p - q|) / (|b - q| + |p - q|), where the region labels denote lightness values.
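The predictor in the final sentence can be written down directly. Below is a small helper implementing that ratio from the region lightness values named in the abstract (the background a is not needed by this model); the function name is ours.

```python
# The simple photometric predictor quoted in the abstract:
# the probability that the left disk appears in front is proportional to
# (|b - q| - |p - q|) / (|b - q| + |p - q|), all values taken as lightness.
def left_in_front_index(b, p, q):
    """Signed index in [-1, 1]; positive values favor seeing the left disk in front."""
    num = abs(b - q) - abs(p - q)
    den = abs(b - q) + abs(p - q)
    return num / den if den != 0 else 0.0
```

For example, with lightness values b = 0.8, p = 0.5, and q = 0.4, the index is (0.4 - 0.1) / (0.4 + 0.1) = 0.6, i.e., a bias toward seeing the left disk in front.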


Journal of Vision | 2013

Adaptation to a spatial offset occurs independently of the flash-drag effect

Taiki Fukiage; Ikuya Murakami

Visual motion can influence the perceived position of an object. For example, in the flash-drag effect, the position of a stationary flashed object at one location appears to shift in the direction of motion presented at another location in the visual field (Whitney & Cavanagh, 2000). The results of previous physiological studies suggest interactions between motion and position information in very early retinotopic areas. However, it is unclear whether the position information that has been distorted by motion further influences the visual processing stage at which adaptable position mechanisms may exist. To examine this, we presented two Gabor patches, each of which was adjacent to oppositely moving inducers, and investigated whether adaptation to the illusory spatial offset caused by the flash-drag effect induced the position aftereffect. Our results show that a change in the perceived offset in the presence of the flash-drag effect did not influence the position aftereffect. These results indicate that internal representations of positions altered by the presence of nearby motion signals do not feed into the mechanism underlying the position aftereffect.


Annual Review of Vision Science | 2018

Motion Perception: From Detection to Interpretation

Shin'ya Nishida; Takahiro Kawabe; Masataka Sawayama; Taiki Fukiage

Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.
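As a concrete anchor for the lower level sketched above, here is a generic Adelson-Bergen-style motion-energy sensor: quadrature pairs of space-time Gabor filters tuned to leftward and rightward drift, whose squared and summed outputs give an opponent motion-energy signal. This is a textbook illustration of "spatiotemporal-frequency-selective sensors," not the specific model discussed in the review, and all filter parameters and names are arbitrary.

```python
# Generic Adelson-Bergen-style motion-energy sensor applied to an x-t luminance pattern.
# Illustrates "spatiotemporal-frequency-selective sensors"; the filter parameters are
# arbitrary and this is not the specific model discussed in the review.
import numpy as np
from scipy.signal import fftconvolve

def st_gabor(size=21, sf=0.1, tf=0.1, sigma=4.0, phase=0.0, direction=+1):
    """Space-time Gabor tuned to drift in the given direction (+1 rightward, -1 leftward)."""
    t, x = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    envelope = np.exp(-(x ** 2 + t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * (sf * x - direction * tf * t) + phase)
    return envelope * carrier

def opponent_motion_energy(xt):
    """Rightward minus leftward motion energy for an x-t array (rows = time)."""
    energy = {}
    for d in (+1, -1):
        even = fftconvolve(xt, st_gabor(phase=0.0, direction=d), mode="valid")
        odd = fftconvolve(xt, st_gabor(phase=np.pi / 2, direction=d), mode="valid")
        energy[d] = even ** 2 + odd ** 2  # quadrature pair -> phase-invariant energy
    return energy[+1] - energy[-1]

# Example: a rightward-drifting grating yields positive opponent energy.
T, X = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
stimulus = np.cos(2 * np.pi * (0.1 * X - 0.1 * T))
print(opponent_motion_energy(stimulus).mean() > 0)  # True
```

In the terms of the abstract, pooling such local energy responses across space and spatiotemporal frequency is what produces the motion vector flow that the higher level then interprets.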


ACM Transactions on Graphics | 2017

Hiding of phase-based stereo disparity for ghost-free viewing without glasses

Taiki Fukiage; Takahiro Kawabe; Shin'ya Nishida


arXiv: Computer Vision and Pattern Recognition | 2017

Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality

Menandro Roxas; Tomoki Hori; Taiki Fukiage; Yasuhide Okamoto; Takeshi Oishi

Collaboration


Top co-authors of Taiki Fukiage:

Masataka Sawayama (Japan Society for the Promotion of Science)

David Whitney (University of California)