Publications


Featured research published by Takaaki Shiratori.


International Conference on Computer Graphics and Interactive Techniques | 2011

Motion capture from body-mounted cameras

Takaaki Shiratori; Hyun Soo Park; Leonid Sigal; Yaser Sheikh; Jessica K. Hodgins

Motion capture technology generally requires that recordings be performed in a laboratory or closed stage setting with controlled lighting. This restriction precludes the capture of motions that require an outdoor setting or the traversal of large areas. In this paper, we present the theory and practice of using body-mounted cameras to reconstruct the motion of a subject. Outward-looking cameras are attached to the limbs of the subject, and the joint angles and root pose are estimated through non-linear optimization. The optimization objective function incorporates terms for image matching error and temporal continuity of motion. Structure-from-motion is used to estimate the skeleton structure and to provide initialization for the non-linear optimization procedure. Global motion is estimated and drift is controlled by matching the captured set of videos to reference imagery. We show results in settings where capture would be difficult or impossible with traditional motion capture systems, including walking outside and swinging on monkey bars. The quality of the motion reconstruction is evaluated by comparing our results against motion capture data produced by a commercially available optical system.
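
The optimization described above can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: `forward_kinematics`, the pose parameterization, and the weight `lambda_smooth` are hypothetical stand-ins for the paper's image-matching and temporal-continuity terms.

```python
import numpy as np

# Minimal sketch of the reconstruction objective, assuming a hypothetical
# forward_kinematics(pose) that predicts the body-mounted camera poses for
# one frame. theta stacks joint angles and root pose for all T frames.
def objective(theta_flat, observed_cam_poses, forward_kinematics,
              lambda_smooth=0.1):
    T = len(observed_cam_poses)
    theta = theta_flat.reshape(T, -1)
    # Image-matching term: predicted vs. observed camera poses per frame.
    match_err = sum(
        np.sum((forward_kinematics(theta[t]) - observed_cam_poses[t]) ** 2)
        for t in range(T)
    )
    # Temporal-continuity term: penalize frame-to-frame acceleration.
    accel = theta[2:] - 2.0 * theta[1:-1] + theta[:-2]
    return match_err + lambda_smooth * np.sum(accel ** 2)

# The structure-from-motion estimate mentioned above would supply the
# initial guess, e.g.:
#   scipy.optimize.minimize(objective, theta0.ravel(), args=(poses, fk))
```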


International Conference on Computer Graphics and Interactive Techniques | 2008

Accelerometer-based user interfaces for the control of a physically simulated character

Takaaki Shiratori; Jessica K. Hodgins

In late 2006, Nintendo released a new game controller, the Wiimote, which included a three-axis accelerometer. Since then, a large variety of novel applications for these controllers has been developed by both independent and commercial developers. We add to this growing library three performance interfaces that allow the user to control the motion of a dynamically simulated, animated character through the motion of his or her arms, wrists, or legs. For comparison, we also implement a traditional joystick/button interface. We assess these interfaces by having users test them on a set of tracks containing turns and pits. Two of the interfaces (legs and wrists) were judged by our subjects to be more immersive and were better liked than the joystick/button interface. Based on an analysis of the failures seen during the user study, all three of the Wiimote interfaces provided better control than the joystick interface.
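
As a rough illustration of how such an accelerometer-driven interface can work (a generic sketch, not the paper's controllers), raw three-axis samples are typically low-pass filtered, and a motion-energy threshold then triggers a character action; the filter constant and threshold below are arbitrary placeholders.

```python
import numpy as np

def smooth(samples, alpha=0.2):
    """Exponential moving average over (N, 3) accelerometer samples."""
    out = np.empty_like(samples, dtype=float)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = alpha * samples[i] + (1.0 - alpha) * out[i - 1]
    return out

def swing_detected(samples, threshold=1.5):
    """True when the filtered motion energy exceeds the trigger threshold."""
    energy = np.linalg.norm(np.diff(smooth(samples), axis=0), axis=1)
    return bool((energy > threshold).any())
```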


European Conference on Computer Vision | 2010

3D reconstruction of a moving point from a series of 2D projections

Hyun Soo Park; Takaaki Shiratori; Iain A. Matthews; Yaser Sheikh

This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.
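
One standard way to realize such a linear solution (a sketch consistent with the abstract, not necessarily the paper's exact formulation) is to represent the trajectory in a low-frequency DCT basis and solve the stacked projection constraints by least squares; the basis size `K` is an illustrative choice.

```python
import numpy as np

def dct_basis(T, K):
    """First K orthonormalized DCT-II basis vectors over T frames."""
    n = np.arange(T)[:, None]
    B = np.cos(np.pi * (n + 0.5) * np.arange(K)[None, :] / T)
    return B / np.linalg.norm(B, axis=0)

def reconstruct_trajectory(Ps, xs, K=10):
    """Ps: (T, 3, 4) camera matrices; xs: (T, 2) observed 2D points."""
    T = len(Ps)
    B = dct_basis(T, K)
    A, b = [], []
    for t in range(T):
        P, (u, v) = Ps[t], xs[t]
        # Each frame gives two equations linear in the DCT coefficients c:
        # (u * P_3 - P_1) . X_t = P_1[3] - u * P_3[3], likewise for v, P_2,
        # where X_t = sum_k B[t, k] * c_k.
        for row, coord in ((0, u), (1, v)):
            a = coord * P[2, :3] - P[row, :3]
            A.append(np.kron(B[t], a))
            b.append(P[row, 3] - coord * P[2, 3])
    c, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return B @ c.reshape(K, 3)              # (T, 3) reconstructed trajectory
```

When the cameras and point move in a way that leaves the system poorly constrained (the "reconstructibility" issue above), a smaller `K` trades fidelity for stability.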


Computer Vision and Pattern Recognition | 2006

Video Completion by Motion Field Transfer

Takaaki Shiratori; Yasuyuki Matsushita; Xiaoou Tang; Sing Bing Kang

Existing methods for video completion typically rely on periodic color transitions, layer extraction, or temporally local motion. However, periodicity may be imperceptible or absent, layer extraction is difficult, and temporally local motion cannot handle large holes. This paper presents a new approach for video completion using motion field transfer to avoid such problems. Unlike prior methods, we fill in missing video parts by sampling spatio-temporal patches of local motion instead of directly sampling color. Once the local motion field has been computed within the missing parts of the video, color can then be propagated to produce a seamless hole-free video. We have validated our method on many videos spanning a variety of scenes. We can also use the same approach to perform frame interpolation using motion fields from different videos.
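
The core substitution, copying motion rather than color, can be sketched as follows; the patch shapes, NaN encoding of unknown flow, and the SSD matching cost are illustrative assumptions.

```python
import numpy as np

def complete_patch(target, sources):
    """Fill a spatio-temporal flow patch with NaNs where flow is unknown.

    The best source patch is the one that agrees most (by SSD) with the
    target's known flow values; its flow, not its color, is copied in.
    """
    known = ~np.isnan(target)
    errors = [np.sum((src[known] - target[known]) ** 2) for src in sources]
    best = sources[int(np.argmin(errors))]
    filled = target.copy()
    filled[~known] = best[~known]           # motion field transfer
    return filled
```

Color is then propagated along the completed flow to produce the final hole-free video, as the abstract describes.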


Computer Graphics Forum | 2006

Dancing-to-Music Character Animation

Takaaki Shiratori; Atsushi Nakazawa; Katsushi Ikeuchi

In computer graphics, considerable research has been conducted on realistic human motion synthesis. However, most research does not consider human emotional aspects, which often strongly affect human motion. This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Our method consists of motion analysis, music analysis, and motion synthesis based on the extracted features. In the analysis steps, motion and music feature vectors are acquired: motion vectors are derived from motion rhythm and intensity, while music vectors are derived from musical rhythm, structure, and intensity. To synthesize a dance performance, we first find candidate motion segments whose rhythm features match those of each music segment, and then find the motion segment set whose intensity is similar to that of the music segments. Additionally, our system allows animators to control the synthesis process by assigning desired motion segments to specified music segments. The experimental results indicate that our method creates dance performances as if the character were listening and expressively dancing to the music.
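
The two-stage matching can be sketched as below; the segment dictionaries, the rhythm distance, and the tolerance are hypothetical placeholders for the paper's feature vectors.

```python
import numpy as np

def choose_motion(music_seg, motion_segs, rhythm_tol=0.2):
    """Stage 1: filter by rhythm similarity; stage 2: pick by intensity."""
    candidates = [
        m for m in motion_segs
        if np.linalg.norm(np.subtract(m["rhythm"],
                                      music_seg["rhythm"])) < rhythm_tol
    ]
    if not candidates:
        candidates = motion_segs            # fall back to the whole library
    return min(candidates,
               key=lambda m: abs(m["intensity"] - music_seg["intensity"]))

def synthesize(music_segs, motion_segs):
    """Greedy per-segment selection; an animator's manual assignments could
    simply override entries of the returned list."""
    return [choose_motion(ms, motion_segs) for ms in music_segs]
```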


Human Factors in Computing Systems | 2011

Motionbeam: a metaphor for character interaction with handheld projectors

Karl D.D. Willis; Ivan Poupyrev; Takaaki Shiratori

We present the MotionBeam metaphor for character interaction with handheld projectors. Our work draws from the tradition of pre-cinema handheld projectors that used direct physical manipulation to control projected imagery. With our prototype system, users interact with and control projected characters by moving and gesturing with the handheld projector itself. This creates a unified interaction style in which input and output are tied together within a single device. We introduce a set of interaction principles and present prototype applications that provide clear examples of the MotionBeam metaphor in use. Finally, we describe observations and insights from a preliminary user study of our system.


Tangible and Embedded Interaction | 2013

HideOut: mobile projector interaction with tangible objects and surfaces

Karl D.D. Willis; Takaaki Shiratori; Moshe Mahler

HideOut is a mobile projector-based system that enables new applications and interaction techniques with tangible objects and surfaces. HideOut uses a device mounted camera to detect hidden markers applied with infrared-absorbing ink. The obtrusive appearance of fiducial markers is avoided and the hidden marker surface doubles as a functional projection surface. We present example applications that demonstrate a wide range of interaction scenarios, including media navigation tools, interactive storytelling applications, and mobile games. We explore the design space enabled by the HideOut system and describe the hidden marker prototyping process. HideOut brings tangible objects to life for interaction with the physical world around us.
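
As a loose illustration of the detection step (a stand-in, not the paper's marker decoder), infrared-absorbing ink appears dark to the device-mounted camera against the brighter surface, so a simple threshold-and-centroid pass can localize a hidden mark; the threshold value is arbitrary.

```python
import numpy as np

def find_hidden_mark(ir_frame, thresh=60):
    """ir_frame: (H, W) uint8 infrared image; returns (row, col) or None.

    IR-absorbing ink shows up dark, so dark pixels are the marker cue.
    A real system would decode full fiducial patterns, not one centroid.
    """
    dark = ir_frame < thresh
    if not dark.any():
        return None
    rows, cols = np.nonzero(dark)
    return float(rows.mean()), float(cols.mean())
```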


Symposium on Computer Animation | 2011

A puppet interface for retrieval of motion capture data

Naoki Numaguchi; Atsushi Nakazawa; Takaaki Shiratori; Jessica K. Hodgins

Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion, and of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. In a user study of the system, subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.
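
The cross-reconstruction metric can be sketched with PCA latent spaces (an assumption; the paper's latent model may differ), provided puppet and human motions are first mapped to a common degree-of-freedom representation:

```python
import numpy as np
from sklearn.decomposition import PCA

def cross_reconstruction_error(puppet, human, n_components=8):
    """puppet, human: (frames, dofs) matrices over a shared dof set.

    Project each motion through the *other* motion's latent space and sum
    the two mean reconstruction errors; lower means more similar.
    """
    pca_h = PCA(n_components).fit(human)
    pca_p = PCA(n_components).fit(puppet)
    err_p = np.mean(
        (puppet - pca_h.inverse_transform(pca_h.transform(puppet))) ** 2)
    err_h = np.mean(
        (human - pca_p.inverse_transform(pca_p.transform(human))) ** 2)
    return err_p + err_h
```

Retrieval would then rank database clips by this error against the puppet query.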


Symposium on Computer Animation | 2009

Simulating balance recovery responses to trips based on biomechanical principles

Takaaki Shiratori; Brooke Coley; Rakié Cham; Jessica K. Hodgins

To realize the full potential of human simulations in interactive environments, we need controllers that can respond appropriately to unexpected events. In this paper, we create controllers for the trip recovery responses that occur during walking. Two strategies have been identified in human responses to tripping: impact from an obstacle during early swing leads to an elevating strategy, in which the swing leg is lifted over the obstacle, while impact during late swing leads to a lowering strategy, in which the swing leg is positioned immediately in front of the obstacle and the other leg is then swung forward and placed in front of the body to allow recovery from the fall. We design controllers for both strategies based on the available biomechanical literature and data captured from human subjects in the laboratory. We evaluate our controllers by comparing simulated results against actual responses obtained with a motion capture system.
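
The phase-dependent choice between the two strategies reduces to a simple dispatch; the normalized phase variable and the boundary value below are illustrative, not the paper's calibrated thresholds.

```python
def select_recovery_strategy(swing_phase):
    """swing_phase in [0, 1]: 0 = toe-off, 1 = heel-strike (illustrative)."""
    if swing_phase < 0.5:
        # Early swing: lift the swing leg over the obstacle.
        return "elevating"
    # Late swing: plant the swing leg, swing the other leg forward.
    return "lowering"
```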


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2003

Rhythmic motion analysis using motion capture and musical information

Takaaki Shiratori; Atsushi Nakazawa; Katsushi Ikeuchi

The number of traditional Japanese dancers is decreasing. Without performers, some dances will disappear, because conventional media such as paper cannot record them. We have proposed an archiving method specifically for dance patterns. Our method has four main stages: 1) digitizing motions with motion capture systems; 2) analyzing the motions; 3) synthesizing motions; and 4) reproducing the dance motions with CG (computer-generated) characters and humanoid robots. To record movement patterns effectively, motion primitives are extracted, each describing a basic motion. However, most previous primitive-extraction methods fail to capture rhythm, which results in unrhythmic synthesized motion. In this paper, we propose a motion analysis method that integrates musical rhythm into the motion primitives. Our experiments confirmed that this analysis yields motion primitives in accordance with the music rhythm.
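
A minimal sketch of the integration step: candidate primitive boundaries (for instance, local minima of end-effector speed, where dancers briefly pause) are snapped to the nearest musical beat, so the extracted primitives respect the music rhythm. The beat times and boundary detection are assumed to come from upstream analysis, and the tolerance is a placeholder.

```python
import numpy as np

def snap_to_beats(boundary_times, beat_times, tol=0.25):
    """Keep boundaries within tol seconds of a beat, snapped onto that beat."""
    beats = np.asarray(beat_times, dtype=float)
    snapped = set()
    for t in boundary_times:
        nearest = beats[np.argmin(np.abs(beats - t))]
        if abs(nearest - t) <= tol:
            snapped.add(float(nearest))
    return sorted(snapped)
```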

Collaboration


Dive into Takaaki Shiratori's collaborations.

Top Co-Authors

Katsushi Ikeuchi
National Institute of Advanced Industrial Science and Technology

Shunsuke Kudoh
University of Electro-Communications

Hyun Soo Park
Carnegie Mellon University

Karl D.D. Willis
Carnegie Mellon University