Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Tsukasa Fukusato is active.

Publication


Featured researches published by Tsukasa Fukusato.


international conference on computer graphics and interactive techniques | 2014

Efficient video viewing system for racquet sports with automatic summarization focusing on rally scenes

Shunya Kawamura; Tsukasa Fukusato; Tatsunori Hirai; Shigeo Morishima

Presented at SIGGRAPH 2014, August 10–14, 2014, Vancouver, British Columbia, Canada. Copyright held by the Owner/Author. ACM 978-1-4503-2958-3/14/08.


international conference on information visualization theory and applications | 2016

RSViewer: An Efficient Video Viewer for Racquet Sports Focusing on Rally Scenes

Shunya Kawamura; Tsukasa Fukusato; Tatsunori Hirai; Shigeo Morishima

This paper presents RSViewer, a video browsing system specialized for racquet sports that reflects users' interests. Methods for supporting users in browsing racquet-sports matches by summarizing video from important rally shots have been discussed in a previous study. However, that method is not practical enough, because auditory events must be manually annotated in advance to detect such scenes. Therefore, we propose automatic rally-shot detection based on a shot-clustering method that uses white-line detection. Our system calculates the importance of rally shots based on audio features. As a result, the summarized video helps users find and review the information they need. Experimental results show that our method provides an efficient video browsing experience. Furthermore, we propose a high-speed playback method customized to racquet-sports video, which makes browsing even more efficient.
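The summarization step described above can be conveyed with a minimal sketch. This is not the authors' implementation: the function name, the shot representation, and the greedy budget strategy are all illustrative assumptions, with shot boundaries and audio-based importance scores taken as precomputed.

```python
# Illustrative sketch (not the published system): rank detected rally shots
# by a precomputed audio-based importance score, then greedily fill a
# time budget to form the summary video.

def summarize_rallies(shots, budget):
    """shots: list of (start_sec, end_sec, importance); budget: max seconds."""
    ranked = sorted(shots, key=lambda s: s[2], reverse=True)
    summary, used = [], 0.0
    for start, end, score in ranked:
        length = end - start
        if used + length <= budget:
            summary.append((start, end))
            used += length
    return sorted(summary)  # play back in chronological order

shots = [(0, 10, 0.9), (20, 35, 0.4), (40, 48, 0.8), (60, 90, 0.95)]
print(summarize_rallies(shots, budget=25))  # → [(0, 10), (40, 48)]
```

Note that the greedy choice skips the highest-scoring rally here because it alone exceeds the budget; a real system might instead trim long rallies.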


motion in games | 2014

Automatic depiction of onomatopoeia in animation considering physical phenomena

Tsukasa Fukusato; Shigeo Morishima

This paper presents a method that enables the estimation and depiction of onomatopoeia in computer-generated animation based on physical parameters. Onomatopoeia is used to enhance physical characteristics and movement, and enables users to understand animation more intuitively. We experiment with onomatopoeia depiction in scenes within the animation process. To quantify onomatopoeia, we adopt Komatsu's [2012] assumption that onomatopoeia can be expressed as an n-dimensional vector. We also propose phonetic-symbol vectors based on the correspondence of phonetic symbols to the impressions of onomatopoeia, determined through a questionnaire-based investigation. Furthermore, we verify the positioning of onomatopoeia in animated scenes. The algorithms directly combine phonetic symbols to estimate the optimum onomatopoeia, and use a view-dependent Gaussian function to display onomatopoeia in animated scenes. Our method successfully recommends optimum onomatopoeia using only physical parameters, so that even amateur animators can easily create onomatopoeia animation.
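The vector-based selection idea in the abstract can be sketched as a nearest-neighbor lookup. Everything here is a placeholder: the lexicon, the two impression axes, and the distance metric are invented for illustration, not taken from the paper.

```python
import math

# Illustrative sketch: following the abstract's premise that each
# onomatopoeia can be expressed as an n-dimensional impression vector,
# pick the word whose vector is closest to an impression vector estimated
# from physical parameters. Axes and values below are hypothetical.

LEXICON = {
    "dokan": (0.9, 0.8),   # (hardness, loudness) -- illustrative axes
    "poyon": (0.1, 0.2),
    "gatan": (0.7, 0.5),
}

def nearest_onomatopoeia(impression):
    # Euclidean distance in impression space; the paper combines phonetic
    # symbols instead of matching whole words, so this is a simplification.
    return min(LEXICON, key=lambda w: math.dist(impression, LEXICON[w]))

print(nearest_onomatopoeia((0.85, 0.75)))  # → "dokan"
print(nearest_onomatopoeia((0.15, 0.1)))   # → "poyon"
```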


international conference on computer graphics and interactive techniques | 2014

Pose-independent garment transfer

Fumiya Narita; Shunsuke Saito; Takuya Kato; Tsukasa Fukusato; Shigeo Morishima

Dressing virtual characters is necessary for many applications such as film and games. However, modeling clothing for characters is a significant bottleneck, because it requires manual effort to design the clothing, position it correctly on the body, and adjust the fit. Therefore, even if we wish to design similar-looking clothing for characters that have very different poses and shapes, we would need to repeat the tedious process practically from scratch. We therefore propose a method for automatic design-preserving transfer of clothing between characters in various poses and shapes. As shown in the results, our system automatically generates a clothing model for a target character.


eurographics | 2016

Garment transfer for quadruped characters

Fumiya Narita; Shunsuke Saito; Takuya Kato; Tsukasa Fukusato; Shigeo Morishima

Modeling clothing for characters is one of the most time-consuming tasks for artists in 3DCG animation production. Transferring existing clothing models is a simple and powerful solution to reduce labor. In this paper, we propose a method to generate a clothing model for various characters from a single template model. Our framework consists of three steps: scale measurement, clothing transformation, and texture preservation. By introducing a novel measurement of the scale deviation between two characters with different shapes and poses, our framework achieves pose-independent transfer of clothing even for quadrupeds (e.g., from human to horse). In addition to a plausible clothing transformation method based on the scale measurement, our method minimizes the texture distortion resulting from large deformations. We demonstrate that our system is robust for a wide range of body shapes and poses, which is challenging for current state-of-the-art methods.
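The transfer-by-scale idea can be reduced to its simplest possible form: per-axis bounding-box ratios between the two bodies, applied to the clothing vertices. The paper's scale measurement is far more elaborate (it accounts for pose differences); this sketch, with hypothetical function names, only conveys the basic idea of reusing one clothing model across differently sized characters.

```python
# Illustrative sketch (not the paper's method): scale clothing vertices by
# the per-axis bounding-box ratio between a source and a target body.

def bbox(points):
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def transfer(cloth, src_body, dst_body):
    (s_lo, s_hi), (d_lo, d_hi) = bbox(src_body), bbox(dst_body)
    scale = [(d_hi[i] - d_lo[i]) / (s_hi[i] - s_lo[i]) for i in range(3)]
    # Map each clothing vertex from source-body space into target-body space.
    return [tuple(d_lo[i] + (p[i] - s_lo[i]) * scale[i] for i in range(3))
            for p in cloth]

print(transfer([(0.5, 0.5, 0.5)],
               [(0, 0, 0), (1, 1, 1)],    # source body extremes (unit cube)
               [(0, 0, 0), (2, 2, 2)]))   # target body, twice the size
# → [(1.0, 1.0, 1.0)]
```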


conference on multimedia modeling | 2016

Computational Cartoonist: A Comic-Style Video Summarization System for Anime Films

Tsukasa Fukusato; Tatsunori Hirai; Shunya Kawamura; Shigeo Morishima

This paper presents Computational Cartoonist, a comic-style anime summarization system that detects key frames and generates comic layouts automatically. In contrast to previous studies, we define evaluation criteria based on the correspondence between anime films and their original comics to determine whether the result of comic-style summarization is relevant. To detect key frames in anime films, the proposed system segments the input video into a series of basic temporal units and computes frame importance using image characteristics such as motion. Subsequently, comic-style layouts are decided on the basis of pre-defined templates stored in a database. Several results demonstrate that our key frame detection outperforms previous methods, as evaluated by the matching accuracy between key frames and original comic panels.
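The importance computation can be illustrated with a toy version: score each frame by a motion proxy and keep the highest-scoring frame per temporal unit. The frame representation, the mean-absolute-difference score, and the function names are assumptions for illustration, not the system's actual features.

```python
# Illustrative sketch (not the authors' pipeline): score each frame of a
# temporal unit by motion magnitude -- here, mean absolute difference from
# the previous frame -- and keep the highest-scoring frame as the key frame.
# Frames are flat grayscale pixel lists for simplicity.

def frame_importance(prev, cur):
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

def key_frame(shot):
    """shot: list of frames; returns the index of the most 'active' frame."""
    scores = [0.0] + [frame_importance(shot[i - 1], shot[i])
                      for i in range(1, len(shot))]
    return max(range(len(shot)), key=lambda i: scores[i])

shot = [[0, 0, 0], [0, 0, 0], [9, 9, 0], [9, 9, 1]]
print(key_frame(shot))  # → 2 (the frame where most pixels change)
```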


international conference on computer graphics theory and applications | 2015

Dance Motion Segmentation Method based on Choreographic Primitives

Narumi Okada; Naoya Iwamoto; Tsukasa Fukusato; Shigeo Morishima

Data-driven animation using a large human motion database enables the programming of various natural human motions. While the development of motion capture systems allows the acquisition of realistic human motion, the captured motion must be segmented into a series of primitive motions to construct such a database. Although most segmentation methods have focused on periodic motion, e.g., walking and jogging, segmenting non-periodic and asymmetrical motions, such as dance performances, remains a challenging problem. In this paper, we present a specialized segmentation approach for human dance motion. Our approach consists of three steps, based on the assumption that human dance motion is composed of consecutive choreographic primitives. First, we perform an investigation based on dancer perception to determine segmentation components. Then, after professional dancers have selected segmentation sequences, we use their selections to define rules for the segmentation of choreographic primitives. Finally, the accuracy of our approach is verified by a user study, which shows that our approach is superior to existing segmentation methods. Through these three steps, we demonstrate automatic dance motion synthesis based on the obtained choreographic primitives.
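One plausible (and deliberately simplified) segmentation rule is sketched below: cut the motion where overall joint speed dips to a local minimum, on the assumption that choreographic primitives are separated by brief holds. This rule, the threshold, and the input representation are hypothetical; the paper derives its rules from dancers' selections.

```python
# Illustrative sketch: segment a dance motion at frames where the average
# joint speed drops to a local minimum below a threshold, treating such
# "holds" as boundaries between choreographic primitives.
# Input: per-frame average joint speed; output: (start, end) frame ranges.

def segment(speeds, threshold=0.1):
    cuts = [0]
    for i in range(1, len(speeds) - 1):
        # local minimum below the threshold => likely primitive boundary
        if speeds[i] < threshold and speeds[i] <= speeds[i - 1] \
                and speeds[i] <= speeds[i + 1]:
            cuts.append(i)
    cuts.append(len(speeds))
    return [(cuts[j], cuts[j + 1]) for j in range(len(cuts) - 1)]

speeds = [0.5, 0.6, 0.05, 0.4, 0.7, 0.03, 0.5]
print(segment(speeds))  # → [(0, 2), (2, 5), (5, 7)]
```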


international conference on computer graphics and interactive techniques | 2015

Texture preserving garment transfer

Fumiya Narita; Shunsuke Saito; Takuya Kato; Tsukasa Fukusato; Shigeo Morishima

Dressing virtual characters is necessary for many applications, while modeling clothing is a significant bottleneck. Therefore, the idea of garment transfer, i.e., transferring a clothing model from one character to another, has been proposed [Brouet et al. 2012]. In recent years, this idea has been extended to be applicable between characters in various poses and shapes [Narita et al. 2014]. However, the texture design of the clothing is not preserved in their method, since they deform the source clothing model to fit the target body (see Figure 1(a)(c)).


international conference on computer graphics and interactive techniques | 2015

BGMaker: example-based anime background image creation from a photograph

Shugo Yamaguchi; Chie Furusawa; Takuya Kato; Tsukasa Fukusato; Shigeo Morishima

Anime designers often paint background scenery based on photographs of actual locations to complement the characters. As painting background scenery is time consuming and costly, there is high demand for techniques that can convert photographs into anime-style graphics. Previous approaches for this purpose, such as Image Quilting [Efros and Freeman 2001], transfer a source texture onto a target photograph. These methods synthesize source patches corresponding to the target elements in a photograph, with correspondence obtained through nearest-neighbor search such as PatchMatch [Barnes et al. 2009]. However, the nearest-neighbor patch is not always the most suitable patch for anime transfer, because photographs and anime background images differ in color and texture. For example, real-world colors need to be converted into specific anime colors; furthermore, the type of brushwork required to achieve an anime effect differs across photograph elements (e.g., sky, mountain, grass). Thus, to obtain the most suitable patch, we propose a method, BGMaker, that establishes global region correspondence before local patch matching: (1) we divide the real and anime images into regions; (2) we automatically acquire correspondence between regions on the basis of color and texture features; and (3) we search for and synthesize the most suitable patch within the corresponding region. Our primary contribution is a method for automatically acquiring correspondence between target and source regions of different color and texture, which allows us to generate an anime background image while preserving the details of the source image.
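The "region correspondence before patch match" idea can be sketched with toy data. Region features and patches are reduced to single scalars here, and all names are hypothetical; a real system would use image descriptors and pixel patches.

```python
# Illustrative sketch of region-constrained patch matching: first match each
# target region to the source region with the closest (color, texture)
# feature, then restrict the nearest-patch search to that region.

def match_regions(target_feats, source_feats):
    """Map each target region id to the most similar source region id."""
    return {t: min(source_feats,
                   key=lambda s: abs(target_feats[t] - source_feats[s]))
            for t in target_feats}

def best_patch(value, patches_by_region, region):
    """Nearest patch, searched only within the corresponding source region."""
    return min(patches_by_region[region], key=lambda p: abs(p - value))

target_feats = {"sky": 0.9, "grass": 0.2}
source_feats = {"sky": 0.85, "grass": 0.25, "mountain": 0.5}
corr = match_regions(target_feats, source_feats)
print(corr["sky"])                                         # → sky
print(best_patch(0.88, {"sky": [0.8, 0.9, 0.95]}, "sky"))  # → 0.9
```

Restricting the search to the matched region is what prevents, e.g., a sky pixel from being synthesized with grass brushwork, even when a grass patch happens to be the global nearest neighbor.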


international conference on computer graphics and interactive techniques | 2014

Quasi 3D rotation for hand-drawn characters

Chie Furusawa; Tsukasa Fukusato; Narumi Okada; Tatsunori Hirai; Shigeo Morishima

Presented at SIGGRAPH 2014, August 10–14, 2014, Vancouver, British Columbia, Canada. Copyright held by the Owner/Author. ACM 978-1-4503-2958-3/14/08.

Collaboration


Dive into Tsukasa Fukusato's collaborations.

Top Co-Authors
