Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ryosuke Ichikari is active.

Publication


Featured research published by Ryosuke Ichikari.


ACM Transactions on Graphics | 2014

Driving High-Resolution Facial Scans with Video Performance Capture

Graham Fyffe; Andrew Jones; Oleg Alexander; Ryosuke Ichikari; Paul E. Debevec

We present a process for rendering a realistic facial performance with control of viewpoint and illumination. The performance is based on one or more high-quality geometry and reflectance scans of an actor in static poses, driven by one or more video streams of a performance. We compute optical flow correspondences between neighboring video frames, and a sparse set of correspondences between static scans and video frames. The latter are made possible by leveraging the relightability of the static 3D scans to match the viewpoint(s) and appearance of the actor in videos taken in arbitrary environments. As optical flow tends to compute proper correspondence for some areas but not others, we also compute a smoothed, per-pixel confidence map for every computed flow, based on normalized cross-correlation. These flows and their confidences yield a set of weighted triangulation constraints among the static poses and the frames of a performance. Given a single artist-prepared face mesh for one static pose, we optimally combine the weighted triangulation constraints, along with a shape regularization term, into a consistent 3D geometry solution over the entire performance that is drift free by construction. In contrast to previous work, even partial correspondences contribute to drift minimization, for example, where a successful match is found in the eye region but not the mouth. Our shape regularization employs a differential shape term based on a spatially varying blend of the differential shapes of the static poses and neighboring dynamic poses, weighted by the associated flow confidences. These weights also permit dynamic reflectance maps to be produced for the performance by blending the static scan maps. Finally, as the geometry and maps are represented on a consistent artist-friendly mesh, we render the resulting high-quality animated face geometry and animated reflectance maps using standard rendering tools.
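
The per-pixel confidence map described above can be sketched compactly. The following Python fragment is a minimal illustration, not the authors' implementation: it computes a smoothed normalized cross-correlation score between a frame and its flow-warped neighbor, which is the quantity the abstract uses to weight each flow. The window size and smoothing sigma are assumed values.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def flow_confidence(img_a, img_b_warped, win=7, sigma=2.0):
    """Per-pixel flow confidence via normalized cross-correlation (NCC)
    between a frame and its flow-warped neighbor, then smoothed.
    `win` and `sigma` are illustrative, not values from the paper."""
    a = img_a.astype(np.float64)
    b = img_b_warped.astype(np.float64)
    # Local means over the correlation window.
    mu_a = uniform_filter(a, win)
    mu_b = uniform_filter(b, win)
    # Local variances and covariance.
    var_a = uniform_filter(a * a, win) - mu_a ** 2
    var_b = uniform_filter(b * b, win) - mu_b ** 2
    cov = uniform_filter(a * b, win) - mu_a * mu_b
    ncc = cov / np.sqrt(np.maximum(var_a * var_b, 1e-8))
    # Clamp to [0, 1] and smooth, yielding the per-pixel confidence map.
    return gaussian_filter(np.clip(ncc, 0.0, 1.0), sigma)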


international conference on computer graphics and interactive techniques | 2011

Enabling on-set stereoscopic MR-based previsualization for 3D filmmaking

Shohei Mori; Ryosuke Ichikari; Fumihisa Shibata; Asako Kimura; Hideyuki Tamura

PreViz is a computer graphics movie that depicts desired scenes during preproduction and is useful for sharing the director's vision among the crew. The film industry therefore regards PreViz as one of the most important processes in filmmaking as productions become more complicated [The Previs Society 2011]. Stereoscopic 3D (S3D) movie production changed not only the modes of expression but also the methods and tools of production. Consequently, it requires even more careful planning, and PreViz remains important. This paper describes a prototype S3D MR-PreViz system that uses mixed reality (MR) technologies to shoot HD S3D PreViz and thereby facilitate PreViz making in S3D. The system is designed as an extension of MR-PreViz for conventional filmmaking [Ichikari et al. 2010].


conference on visual media production | 2011

Head-Mounted Photometric Stereo for Performance Capture

Andrew Jones; Graham Fyffe; Xueming Yu; Wan-Chun Ma; Jay Busch; Ryosuke Ichikari; Mark T. Bolas; Paul E. Debevec

Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.
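
The core of LED-based photometric stereo is a per-pixel least-squares solve for the surface normal. A minimal sketch in Python follows, assuming a Lambertian surface and pre-calibrated LED directions; the data layout is an assumption, not the paper's calibration format.

import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Per-pixel surface normals from frames lit by known LED directions,
    assuming a Lambertian surface: I = albedo * (N . L).
    `images`: (k, h, w) grayscale frames, one per LED.
    `light_dirs`: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                          # (k, h*w) intensities
    # Least-squares solve light_dirs @ G = I for G = albedo * N per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None) # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    N = G / np.maximum(albedo, 1e-8)                   # unit normals
    return N.T.reshape(h, w, 3), albedo.reshape(h, w)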


international conference on computer graphics and interactive techniques | 2013

Driving high-resolution facial blendshapes with video performance capture

Graham Fyffe; Andrew Jones; Oleg Alexander; Ryosuke Ichikari; Paul Graham; Koki Nagano; Jay Busch; Paul E. Debevec

We present a technique for creating realistic facial animation from a set of high-resolution static scans of an actor's face driven by passive video of the actor from one or more viewpoints. We capture high-resolution static geometry using multi-view stereo and gradient-based photometric stereo [Ghosh et al. 2011]. The scan set includes around 30 expressions largely inspired by the Facial Action Coding System (FACS). Examples of the input scan geometry can be seen in Figure 1 (a). The base topology is defined by an artist for the neutral scan of each subject. The dynamic performance can be shot under existing environmental illumination using one or more off-the-shelf HD video cameras.
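
Once the FACS-inspired scans are registered to the artist-defined base topology, the standard delta-blendshape model applies. The sketch below illustrates that model only; it is not the paper's solver, which fits the weights from video.

import numpy as np

def blend_face(neutral, expressions, weights):
    """Minimal delta-blendshape model: the animated face is the neutral
    mesh plus a weighted sum of expression offsets.
    `neutral`: (v, 3) vertices; `expressions`: (n, v, 3) scans registered
    to the same topology; `weights`: (n,) blend weights."""
    deltas = expressions - neutral[None, :, :]   # per-expression offsets
    return neutral + np.tensordot(weights, deltas, axes=1)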


international conference on computer vision | 2010

Computer vision technology applied to MR-based pre-visualization in filmmaking

Hideyuki Tamura; Takashi Matsuyama; Naokazu Yokoya; Ryosuke Ichikari; Shohei Nobuhara; Tomokazu Sato

In this talk, we outline "The MR-PreViz Project" carried out in Japan. In the pre-production process of filmmaking, PreViz, which pre-visualizes the desired scene with CGI, is used as a new technique. As an advanced approach, we propose MR-PreViz, which applies mixed reality technology to current PreViz. MR-PreViz makes it possible to merge the real background with computer-generated humans and creatures in an open set or at an outdoor location. Computer vision technologies are required for many aspects of MR-PreViz. For capturing an actor's action, we applied 3D Video, a technology that reconstructs an image seen from any viewpoint in real time from video images taken by multiple cameras. As another application of computer vision, we developed a vision-based camera tracking method. Before shooting, the method efficiently collects the environmental information required for tracking using a structure-from-motion technique. Additionally, we developed a relighting technique for the lighting design of MR-PreViz movies.
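
For the vision-based camera tracking step, a common formulation (assumed here as a stand-in for the authors' method) is to match 2D features in the current frame against the 3D landmarks collected beforehand by structure from motion, then recover the camera pose with a PnP solver. The matching step is omitted; inputs are assumed already paired.

import numpy as np
import cv2

def track_camera(landmarks_3d, keypoints_2d, K):
    """Camera pose from pre-collected 3D landmarks and matched 2D points,
    using OpenCV's standard PnP+RANSAC solver as an illustrative stand-in."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        landmarks_3d.astype(np.float64),   # (n, 3) landmark positions
        keypoints_2d.astype(np.float64),   # (n, 2) matched image points
        K, None)                           # intrinsics; no distortion assumed
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix from axis-angle
    return R, tvec                         # world-to-camera pose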


computer animation and social agents | 2016

Rapid Photorealistic Blendshape Modeling from RGB-D Sensors

Dan Casas; Andrew W. Feng; Oleg Alexander; Graham Fyffe; Paul E. Debevec; Ryosuke Ichikari; Hao Li; Kyle Olszewski; Evan A. Suma; Ari Shapiro

Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing requires no artistic expertise to operate, takes 15 seconds to capture and generate a single facial expression, and approximately 1 minute of processing time per expression to transform it into a blendshape model. Our main contributions include a complete end-to-end pipeline for capturing and generating photorealistic blendshape models automatically and a registration method that solves dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing different human subjects with a variety of sensors and puppeteering their 3D faces with real-time facial performance retargeting. The rapid nature of our method allows for just-in-time construction of a digital face. To that end, we also integrated our pipeline with a virtual reality facial performance capture system that allows dynamic embodiment of the generated faces despite partial occlusion of the user's real face by the head-mounted display.
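
The registration idea, dense correspondences between two face scans, can be sketched with dense optical flow. Farneback flow is used below purely as an illustrative stand-in; the paper combines flow with facial landmark detection, which this fragment omits.

import numpy as np
import cv2

def dense_correspondence(src_gray, dst_gray):
    """Per-pixel mapping from a source face image to a target face image
    via dense optical flow. Flow parameters are illustrative assumptions."""
    flow = cv2.calcOpticalFlowFarneback(
        src_gray, dst_gray, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    h, w = src_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each source pixel (x, y) corresponds to (x + u, y + v) in the target.
    return np.dstack([xs + flow[..., 0], ys + flow[..., 1]])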


international symposium on mixed and augmented reality | 2015

[POSTER] Road Maintenance MR System Using LRF and PDR

Ching-Tzun Chang; Ryosuke Ichikari; Koji Makita; Takashi Okuma; Takeshi Kurata

We have been developing a mixed reality system to support road maintenance using overlaid visual aids. Such a system requires a positioning method that can provide sub-meter accuracy and function even if the appearance of the road surface changes significantly due to factors such as construction phase, time of day, and weather. Therefore, we are developing a real-time worker positioning method that can be applied to these situations by integrating laser range finder (LRF) and pedestrian dead-reckoning (PDR) data. Because multiple workers move around the workspace in the field, it is necessary to determine corresponding pairs of PDR-based and LRF-based trajectories by identifying similar trajectories. In this study, we propose a method to calculate the similarity between trajectories and a procedure to integrate corresponding pairs of trajectories to acquire the position and movement direction of a worker.
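
A minimal version of the trajectory-matching step might look like the following. The similarity measure here (mean point-to-point distance over trajectories resampled to common timestamps) is an assumption for illustration, not the measure proposed in the paper.

import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Mean point-to-point distance between two trajectories, assumed
    resampled to the same timestamps; lower means more similar."""
    n = min(len(traj_a), len(traj_b))
    return float(np.mean(np.linalg.norm(traj_a[:n] - traj_b[:n], axis=1)))

def match_trajectories(pdr_trajs, lrf_trajs):
    """Greedy pairing of PDR and LRF trajectories by smallest distance,
    so each worker's two trajectories can be fused into one estimate."""
    pairs, used = [], set()
    for i, pdr in enumerate(pdr_trajs):
        scores = [(trajectory_distance(pdr, lrf), j)
                  for j, lrf in enumerate(lrf_trajs) if j not in used]
        if scores:
            d, j = min(scores)
            pairs.append((i, j, d))
            used.add(j)
    return pairs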


interactive 3d graphics and games | 2015

Rapid photorealistic blendshapes from commodity RGB-D sensors

Dan Casas; Oleg Alexander; Andrew W. Feng; Graham Fyffe; Ryosuke Ichikari; Paul E. Debevec; Ruizhe Wang; Evan A. Suma; Ari Shapiro

Creating and animating a realistic 3D human face has been an important task in computer graphics. The capability of capturing the 3D face of a human subject and reanimating it quickly will find many applications in games, training simulations, and interactive 3D graphics. In this paper, we propose a system to capture photorealistic 3D faces and generate the blendshape models automatically using only a single commodity RGB-D sensor. Our method can rapidly generate a set of expressive facial poses from a single Microsoft Kinect and requires no artistic expertise on the part of the capture subject. The system takes only a matter of seconds to capture and produce a 3D facial pose and only requires 4 minutes of processing time to transform it into a blendshape model. Our main contributions include an end-to-end pipeline for capturing and generating face blendshape models automatically, and a registration method that solves dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing 3D facial models of different human subjects and puppeteering their models in an animation system with real-time facial performance retargeting.
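
The real-time retargeting step implies solving, each frame, for blendshape weights that reproduce the tracked face. One common formulation, assumed here rather than taken from the paper, is non-negative least squares over the expression deltas.

import numpy as np
from scipy.optimize import nnls

def retarget_weights(deltas, target_offset):
    """Solve argmin_w ||B w - d||^2 with w >= 0, where B stacks the
    flattened expression deltas (n, v, 3) and d is the observed offset
    from the neutral face (v, 3). A standard formulation, not the
    paper's specific solver."""
    B = deltas.reshape(deltas.shape[0], -1).T   # (3v, n) basis matrix
    d = target_offset.ravel()                   # (3v,) observed offset
    w, residual = nnls(B, d)
    return w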


international conference on computer graphics and interactive techniques | 2010

On-site real-time 3D match move for MR-based previsualization with relighting

Ryosuke Ichikari; Kaori Kikuchi; Wataru Toishita; Ryuhei Tenmoku; Fumihisa Shibata; Hideyuki Tamura

We are developing a previsualization method called MR-PreViz, which utilizes mixed reality technology for filmmaking [Tenmoku et al. 2006]. To determine camera work at the shooting site, the camera position and orientation must be estimated. In this paper, we introduce a method for on-site real-time 3D match move and relighting for MR-PreViz. To realize the match move, we developed a computer vision-based camera tracking method using natural feature tracking. This method is based on information about the site captured in advance: it automatically constructs a feature landmark database (LMDB) using a fiducial marker. Moreover, the method's output enables MR-PreViz to design lighting for the site using a relighting method. To add lighting effects to real objects, the relighting method uses the reflectance properties of the real objects and the LMDB.


international conference on computer graphics and interactive techniques | 2009

Designing cinematic lighting by relighting in MR-based pre-visualization

Ryosuke Ichikari; Ryohei Hatano; Toshikazu Ohshima; Fumihisa Shibata; Hideyuki Tamura

This paper describes a relighting method for designing cinematic lighting in filmmaking. The method enables the mixed-reality-based pre-visualization system MR-PreViz to change illumination conditions: additional virtual lighting can be introduced and actual illumination removed when designing cinematic lighting. The lighting effects are applied correctly to both real and virtual objects.
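
Under a simple Lambertian assumption, adding virtual lights and removing real ones reduces to adding and subtracting per-pixel shading terms. The toy sketch below illustrates that idea only; the abstract does not specify the method's actual reflectance model or light representation.

import numpy as np

def relight_lambertian(base, albedo, normals, lights_add, lights_remove):
    """Toy Lambertian relighting: add the shading of new virtual lights to
    a base image and subtract the shading of real lights being removed.
    `albedo` and `normals` are (h, w, 3) per-pixel maps standing in for the
    recovered reflectance properties; each light is a (direction, color)
    pair, both illustrative assumptions."""
    out = base.astype(np.float64).copy()
    for direction, color in lights_add:
        ndotl = np.clip(normals @ np.asarray(direction, float), 0.0, None)
        out += albedo * ndotl[..., None] * np.asarray(color, float)
    for direction, color in lights_remove:
        ndotl = np.clip(normals @ np.asarray(direction, float), 0.0, None)
        out -= albedo * ndotl[..., None] * np.asarray(color, float)
    return np.clip(out, 0.0, 1.0)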

Collaboration


Dive into Ryosuke Ichikari's collaborations.

Top Co-Authors

Hideyuki Tamura (Nara Institute of Science and Technology)
Paul E. Debevec (University of Southern California)
Graham Fyffe (University of Southern California)
Oleg Alexander (University of Southern California)
Takeshi Kurata (National Institute of Advanced Industrial Science and Technology)
Andrew Jones (University of Colorado Boulder)
Andrew W. Feng (University of Southern California)