Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Shuntaro Yamazaki is active.

Publication


Featured research published by Shuntaro Yamazaki.


International Conference on Computer Graphics and Interactive Techniques | 2008

Interaction patches for multi-character animation

Hubert P. H. Shum; Taku Komura; Masashi Shiraishi; Shuntaro Yamazaki

We propose a data-driven approach to automatically generate a scene where tens to hundreds of characters densely interact with each other. During off-line processing, the close interactions between characters are precomputed by expanding a game tree, and these are stored as data structures called interaction patches. Then, during run-time, the system spatio-temporally concatenates the interaction patches to create scenes where a large number of characters closely interact with one another. Using our method, it is possible to automatically or interactively produce animations of crowds interacting with each other in a stylized way. The method can be used for a variety of applications including TV programs, advertisements and movies.
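The run-time stage, concatenating precomputed patches whose boundary states match, can be sketched in a few lines. The patch representation, state labels, and greedy candidate choice below are illustrative assumptions, not the paper's actual data structures:

```python
from collections import defaultdict

# Hypothetical minimal patch representation: each interaction patch records
# the boundary states of its precomputed clip, so patches can be chained at
# run-time when one patch ends where the next begins. Names and states are
# invented for illustration.
class InteractionPatch:
    def __init__(self, name, start_state, end_state):
        self.name = name
        self.start_state = start_state
        self.end_state = end_state

def concatenate(patches, initial_state, length):
    """Greedily chain patches whose start state matches the current end state."""
    by_start = defaultdict(list)
    for p in patches:
        by_start[p.start_state].append(p)
    sequence, state = [], initial_state
    for _ in range(length):
        candidates = by_start.get(state)
        if not candidates:
            break
        patch = candidates[0]      # a real system would score the candidates
        sequence.append(patch.name)
        state = patch.end_state
    return sequence

patches = [
    InteractionPatch("punch", "approach", "clinch"),
    InteractionPatch("dodge", "clinch", "approach"),
    InteractionPatch("throw", "clinch", "down"),
]
print(concatenate(patches, "approach", 3))  # ['punch', 'dodge', 'punch']
```

A real system would also concatenate patches spatially and pick among matching candidates by a quality score rather than taking the first.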


Computer Vision and Pattern Recognition | 2011

Simultaneous self-calibration of a projector and a camera using structured light

Shuntaro Yamazaki; Masaaki Mochimaru; Takeo Kanade

We propose a method for geometric calibration of an active vision system, composed of a projector and a camera, using structured light projection. Unlike existing methods of self-calibration for projector-camera systems, our method estimates the intrinsic parameters of both the projector and the camera as well as extrinsic parameters except a global scale without any calibration apparatus such as a checker-pattern board. Our method is based on the decomposition of a radial fundamental matrix into intrinsic and extrinsic parameters. Dense and accurate correspondences are obtained utilizing structured light patterns consisting of Gray code and phase-shifting sinusoidal code. To alleviate the sensitivity issue in estimating and decomposing the radial fundamental matrix, we propose an optimization approach that guarantees the possible solution using a prior for the principal points. We demonstrate the stability of our method using several examples and evaluate the system quantitatively and qualitatively.
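The Gray-code half of the correspondence scheme projects one bit-plane per pattern and reassembles the observed bits at each camera pixel into a projector coordinate. A minimal sketch, assuming a hypothetical projector with 2^10 = 1024 columns and omitting the phase-shifting refinement and all geometric calibration steps from the paper:

```python
# Binary-reflected Gray code and its inverse; consecutive codes differ in
# exactly one bit, which makes the patterns robust at stripe boundaries.
def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

NUM_BITS = 10  # 2**10 = 1024 projector columns (an assumed resolution)

def pattern_bit(column, k):
    """Intensity (0/1) of projector column `column` in the k-th bit-plane pattern."""
    return (gray_encode(column) >> k) & 1

def decode_pixel(bits):
    """Recover the projector column a camera pixel sees from its observed bits."""
    g = 0
    for k, b in enumerate(bits):
        g |= b << k
    return gray_decode(g)

col = 300
bits = [pattern_bit(col, k) for k in range(NUM_BITS)]
print(decode_pixel(bits))  # 300
```

The dense pixel-to-column correspondences recovered this way are what feed the radial fundamental matrix estimation described above.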


Interactive 3D Graphics and Games | 2008

Simulating interactions of avatars in high dimensional state space

Hubert P. H. Shum; Taku Komura; Shuntaro Yamazaki

Efficient computation of strategic movements is essential to control virtual avatars intelligently in computer games and 3D virtual environments. Such a module is needed to control non-player characters (NPCs) to fight, play team sports or move through a mass crowd. Reinforcement learning is an approach to achieve real-time optimal control. However, the huge state space of human interactions makes it difficult to apply existing learning methods to control avatars when they have dense interactions with other characters. In this research, we propose a new methodology to efficiently plan the movements of an avatar interacting with another. We make use of the fact that the subspace of meaningful interactions is much smaller than the whole state space of two avatars. We efficiently collect samples by exploring the subspace where dense interactions between the avatars occur and favor samples that have high connectivity with the other samples. Using the collected samples, a finite state machine (FSM) called Interaction Graph is composed. At run-time, we compute the optimal action of each avatar by minmax search or dynamic programming on the Interaction Graph. The methodology is applicable to control NPCs in fighting and ball-sports games.
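The run-time optimal-action step can be illustrated with a toy stand-in for the Interaction Graph: a hand-written FSM whose nodes, actions, and rewards are invented for illustration, searched by a plain min-max recursion (the paper also mentions dynamic programming):

```python
# Toy Interaction Graph: each action maps to (successor node, reward for the
# maximizing avatar); the two avatars alternate turns, and the opponent's
# rewards count against the maximizer.
graph = {
    "face_off":  {"attack": ("exchange", 1), "wait": ("face_off", 0)},
    "exchange":  {"press": ("advantage", 2), "retreat": ("face_off", 0)},
    "advantage": {"finish": ("face_off", 3), "hold": ("advantage", 1)},
}

def minimax(node, depth, maximizing):
    """Best cumulative reward the maximizing avatar can secure from `node`
    within `depth` moves, assuming an optimal opponent."""
    if depth == 0:
        return 0
    best = None
    for action, (succ, reward) in graph[node].items():
        signed = reward if maximizing else -reward
        value = signed + minimax(succ, depth - 1, not maximizing)
        if best is None or (maximizing and value > best) \
                or (not maximizing and value < best):
            best = value
    return best

print(minimax("face_off", 4, True))
```

Precomputing these values for every node (dynamic programming over the FSM) is what makes the run-time lookup cheap.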


IEEE Transactions on Visualization and Computer Graphics | 2012

Simulating Multiple Character Interactions with Collaborative and Adversarial Goals

Hubert P. H. Shum; Taku Komura; Shuntaro Yamazaki

This paper proposes a new methodology for synthesizing animations of multiple characters, allowing them to intelligently compete with one another in dense environments, while still satisfying requirements set by an animator. To achieve these two conflicting objectives simultaneously, our method separately evaluates the competition and collaboration of the interactions, integrating the scores to select an action that maximizes both criteria. We extend the idea of min-max search, normally used for strategic games such as chess. Using our method, animators can efficiently produce scenes of dense character interactions such as those in collective sports or martial arts. The method is especially effective for producing animations along story lines, where the characters must follow multiple objectives, while still accommodating geometric and kinematic constraints from the environment.
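The integration of the two evaluations can be sketched as a weighted action selection; the candidate actions, score names, and fixed weight below are invented for illustration, not the paper's scoring functions:

```python
# Illustrative action selection integrating a competition score (how well the
# action counters the opponent) with a collaboration score (how well it serves
# the animator's requirements); all names and numbers are assumptions.
def select_action(candidates, weight=0.5):
    def integrated(c):
        return weight * c["competition"] + (1 - weight) * c["collaboration"]
    return max(candidates, key=integrated)["name"]

candidates = [
    {"name": "lunge", "competition": 0.9, "collaboration": 0.1},
    {"name": "feint", "competition": 0.6, "collaboration": 0.7},
    {"name": "yield", "competition": 0.1, "collaboration": 0.9},
]
print(select_action(candidates))  # feint
```

The integrated score rejects actions that are strong on only one criterion, which is the behavior the abstract describes.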


Virtual Reality Software and Technology | 2007

Simulating competitive interactions using singly captured motions

Hubert P. H. Shum; Taku Komura; Shuntaro Yamazaki

It is difficult to create scenes where multiple avatars are fighting or competing with each other. Manually creating the motions of avatars is time consuming due to the correlation of the movements between the avatars. Capturing the motions of multiple avatars is also difficult, as it requires a huge amount of post-processing. In this paper, we propose a new method to generate a realistic scene of avatars densely interacting in a competitive environment. The motions of the avatars are captured individually, which makes the data much easier to obtain. We propose a new algorithm called the temporal expansion approach, which maps the continuous-time action plan to a discrete space such that turn-based evaluation methods can be used. As a result, many mature game algorithms such as min-max search and α–β pruning can be applied. Using our method, avatars plan their strategies taking into account the reaction of the opponent. Fighting scenes with multiple avatars are generated to demonstrate the effectiveness of our algorithm. The proposed method can also be applied to other kinds of continuous activities that require strategy planning, such as sports games.
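Once the temporal expansion approach has discretized the action plan into turns, textbook α–β pruning applies directly. A generic sketch on a tiny hand-made game tree (not the paper's avatar state space):

```python
# Textbook alpha-beta pruning: inner lists alternate turns between the two
# players, leaves hold scores for the maximizing player.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):        # leaf: evaluated game state
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # remaining children cannot matter
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [6, 9], [1, 2]]               # max over three min nodes
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```

In the last branch the search stops after seeing the leaf 1, since the minimizer can already force a value below the best option found so far.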


International Conference on Computer Vision | 2007

Coplanar Shadowgrams for Acquiring Visual Hulls of Intricate Objects

Shuntaro Yamazaki; Srinivasa G. Narasimhan; Simon Baker; Takeo Kanade

Acquiring 3D models of intricate objects (like tree branches, bicycles and insects) is a hard problem due to severe self-occlusions, repeated thin structures and surface discontinuities. In theory, a shape-from-silhouettes (SFS) approach can overcome these difficulties and use many views to reconstruct visual hulls that are close to the actual shapes. In practice, however, SFS is highly sensitive to errors in silhouette contours and the calibration of the imaging system, and therefore not suitable for obtaining reliable shapes with a large number of views. We present a practical approach to SFS using a novel technique called coplanar shadowgram imaging, which allows us to use dozens to even hundreds of views for visual hull reconstruction. Here, a point light source is moved around an object and the shadows (silhouettes) cast onto a single background plane are observed. We characterize this imaging system in terms of image projection, reconstruction ambiguity, epipolar geometry, and shape and source recovery. The coplanarity of the shadowgrams yields novel geometric properties that are not possible in traditional multi-view camera-based imaging systems. These properties allow us to derive a robust and automatic algorithm to recover the visual hull of an object and the 3D positions of the light source simultaneously, regardless of the complexity of the object. We demonstrate the acquisition of several intricate shapes with severe occlusions and thin structures, using 50 to 120 views.
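The carving principle behind visual hull reconstruction, keep a voxel only if it projects inside every silhouette, can be sketched with orthographic projections along the coordinate axes; the paper's perspective shadowgram geometry and light-source estimation are omitted in this toy version:

```python
import numpy as np

def visual_hull(silhouettes, size):
    """Keep a voxel only if its projection lies inside every silhouette.
    `silhouettes` maps a projection axis (0, 1, or 2) to a 2D boolean mask."""
    hull = np.ones((size, size, size), dtype=bool)
    idx = np.indices((size, size, size))
    for axis, mask in silhouettes.items():
        kept = [a for a in range(3) if a != axis]  # image axes for this view
        hull &= mask[idx[kept[0]], idx[kept[1]]]
    return hull

size = 8
obj = np.zeros((size,) * 3, dtype=bool)            # ground truth: a 2x2x2 cube
obj[1:3, 1:3, 1:3] = True
sils = {a: obj.any(axis=a) for a in range(3)}      # three orthographic silhouettes
hull = visual_hull(sils, size)
print(int(hull.sum()))  # 8: three axis-aligned views carve the cube exactly
```

For intricate shapes the hull only approaches the true surface as the number of views grows, which is why the paper pushes to 50–120 views.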


British Machine Vision Conference | 2011

Hamming Color Code for Dense and Robust One-shot 3D Scanning

Shuntaro Yamazaki; Akira Nukada; Masaaki Mochimaru

We propose a novel color code, the Hamming color code, designed for rapid 3D shape acquisition using structured light projection. The Hamming color code has several properties that are desirable for practical 3D acquisition. First, the Hamming distance of adjacent colors is always 1, which makes color detection robust to color blending due to defocusing, subsurface scattering, or chromatic aberration. Second, substrings of a certain length are guaranteed to be unique. In other words, the Hamming code can be viewed as a subset of a de Bruijn sequence. Third, a one-dimensional coordinate can be encoded for each pixel, which enables dense 3D reconstruction from a single pattern projection. Thanks to the uniqueness and robustness of the substrings, the structured light can be decoded stably by dynamic programming. We have implemented parallel dynamic programming on the GPU, achieved a speed-up by a factor of 630 compared to the CPU-based implementation, and accomplished video-rate 3D acquisition using commodity hardware. Several experiments have been conducted to demonstrate the stability and performance of our algorithm. Finally, we discuss the limitations and future directions of this work.
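The two code properties, unit Hamming distance between neighbors and unique fixed-length substrings, can be checked mechanically. As an illustrative stand-in (the actual Hamming color code is longer and repeats colors), a 3-bit binary-reflected Gray code over the eight primary/secondary projector colors already satisfies both:

```python
def gray_sequence(bits):
    """Binary-reflected Gray code: consecutive codes differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** bits)]

def hamming(a, b):
    return bin(a ^ b).count("1")

seq = gray_sequence(3)                 # 8 colors, one bit per R/G/B channel
# Property 1: adjacent colors always have Hamming distance 1, so a blended
# color at a stripe boundary is still close to both neighbors.
assert all(hamming(a, b) == 1 for a, b in zip(seq, seq[1:]))
# Property 2: substrings of a fixed window length are unique, so a window of
# observed colors identifies its position in the projected pattern.
window = 2
subs = [tuple(seq[i:i + window]) for i in range(len(seq) - window + 1)]
assert len(subs) == len(set(subs))
print([format(c, "03b") for c in seq])
```

In the real one-shot system, dynamic programming matches a noisy run of observed colors against this kind of sequence to recover dense stripe positions.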


Asian Conference on Computer Vision | 2006

Inverse volume rendering approach to 3D reconstruction from multiple images

Shuntaro Yamazaki; Masaaki Mochimaru; Takeo Kanade

This paper presents a method of image-based 3D modeling for intricately-shaped objects, such as fur, tree leaves, and human hair. We formulate the imaging process of these small geometric structures as volume rendering followed by image matting, and prove that the inverse problem can be solved by reducing the nonlinear equations to a large linear system. This estimation, which we call inverse volume rendering, can be performed efficiently through the expectation-maximization method, even when the linear system is under-constrained owing to data sparseness. We reconstruct object shape as a set of coarse voxels that model the spatial occupancy inside each voxel. Experimental results show that intricately-shaped objects can successfully be modeled by our proposed method, and that the original and other novel views of the objects can be synthesized by forward volume rendering.
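Solving a large nonnegative linear system by expectation maximization can be illustrated with the classic Richardson–Lucy-style multiplicative update, which is an EM algorithm for b ≈ Ax with x ≥ 0; the matrix and vectors below are synthetic, and this is not the paper's exact formulation:

```python
import numpy as np

def em_nonneg_solve(A, b, iters=2000):
    """EM (Richardson-Lucy-style) multiplicative updates for b ~ A x, x >= 0.
    Each update preserves nonnegativity and monotonically decreases the
    KL divergence between b and A x."""
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)                  # A^T 1
    for _ in range(iters):
        ratio = b / np.maximum(A @ x, 1e-12)  # observed / predicted
        x *= (A.T @ ratio) / np.maximum(col_sums, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((6, 4))                        # synthetic rendering matrix
x_true = np.array([0.2, 0.1, 0.7, 0.4])       # synthetic voxel occupancies
b = A @ x_true                                # synthetic observations
x = em_nonneg_solve(A, b)
print(np.abs(A @ x - b).max())                # small residual
```

The multiplicative form is what makes the method tolerant of an under-constrained system: components that the data do not constrain simply stop moving instead of diverging.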


International Journal of Computer Vision | 2009

The Theory and Practice of Coplanar Shadowgram Imaging for Acquiring Visual Hulls of Intricate Objects

Shuntaro Yamazaki; Srinivasa G. Narasimhan; Simon Baker; Takeo Kanade

Acquiring 3D models of intricate objects (like tree branches, bicycles and insects) is a challenging task due to severe self-occlusions, repeated thin structures, and surface discontinuities. In theory, a shape-from-silhouettes (SFS) approach can overcome these difficulties and reconstruct visual hulls that are close to the actual shapes, regardless of the complexity of the object. In practice, however, SFS is highly sensitive to errors in silhouette contours and the calibration of the imaging system, and has therefore not been used for obtaining accurate shapes with a large number of views. In this work, we present a practical approach to SFS using a novel technique called coplanar shadowgram imaging that allows us to use dozens to even hundreds of views for visual hull reconstruction. A point light source is moved around an object and the shadows (silhouettes) cast onto a single background plane are imaged. We characterize this imaging system in terms of image projection, reconstruction ambiguity, epipolar geometry, and shape and source recovery. The coplanarity of the shadowgrams yields unique geometric properties that are not possible in traditional multi-view camera-based imaging systems. These properties allow us to derive a robust and automatic algorithm to recover the visual hull of an object and the 3D positions of the light source simultaneously, regardless of the complexity of the object. We demonstrate the acquisition of several intricate shapes with severe occlusions and thin structures, using 50 to 120 views.


International Conference on Pattern Recognition | 2014

Extracting Watermark from 3D Prints

Shuntaro Yamazaki; Satoshi Kagami; Masaaki Mochimaru

We propose a method of extracting a watermark from 3D prints created from 3D mesh data. The watermark is embedded into the 3D mesh in the spectral domain using a robust, imperceptible, and informed algorithm based on the spread-spectrum technique, and then extracted from 3D prints by reconstructing a 3D mesh homologous to the original. A suspect 3D mesh is reconstructed accurately and robustly from noisy and incomplete 3D scans of the 3D prints by optimizing a sparse subset of the spectrum using a variant of the iterative closest point technique. The mesh is registered to the original mesh by construction, in terms of geometry and topology, which allows us to extract a suspect watermark by simple algebraic operations. Comprehensive experiments have been conducted to demonstrate the performance of our method using standard 3D datasets and multiple 3D printers. The proposed method significantly outperforms prior methods under different conditions in practical scenarios. The probability of false positive detection is kept below 10⁻⁶ in most cases of simulated and real data.
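Spread-spectrum embedding and correlation-based detection can be sketched on a 1-D vector of spectral coefficients; the signal size, strength α, noise level, and threshold below are illustrative assumptions, and the paper operates on mesh spectral coefficients with its own informed extractor:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 4096
coeffs = rng.normal(0.0, 1.0, N)           # stand-in host spectral coefficients
chip = rng.choice([-1.0, 1.0], N)          # pseudo-random spreading sequence (the key)
alpha = 0.1                                # embedding strength

marked = coeffs + alpha * chip             # spread-spectrum embedding
noisy = marked + rng.normal(0.0, 0.05, N)  # stand-in for print/scan/reconstruction noise

# Informed detection: correlate against the known chip sequence. For an
# unmarked signal the normalized correlation is ~N(0, 1/N), so the embedded
# strength alpha stands out well above chance level.
score = (noisy @ chip) / N
print(score > alpha / 2)
```

Setting the detection threshold several standard deviations above the unmarked-signal distribution is what keeps the false-positive probability low, mirroring the 10⁻⁶ bound reported in the abstract.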

Collaboration


Dive into Shuntaro Yamazaki's collaborations.

Top Co-Authors


Masaaki Mochimaru

National Institute of Advanced Industrial Science and Technology


Taku Komura

University of Edinburgh


Takeo Kanade

Carnegie Mellon University


Satoshi Kagami

National Institute of Advanced Industrial Science and Technology


Akira Nukada

Tokyo Institute of Technology


Makiko Kouchi

National Institute of Advanced Industrial Science and Technology
