Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Shigeo Morishima is active.

Publication


Featured research published by Shigeo Morishima.


Computer Graphics Forum | 2015

Multi-layer Lattice Model for Real-Time Dynamic Character Deformation

Naoya Iwamoto; Hubert P. H. Shum; Longzhi Yang; Shigeo Morishima

Due to the recent advancement of computer graphics hardware and software algorithms, deformable characters have become more and more popular in real-time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On one hand, traditional volumetric approaches such as the finite element method incur a high computational cost and are infeasible on limited hardware such as game consoles. On the other hand, while shape-matching-based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains or cannot catch up with the body movement. In this paper, we propose a unified multi-layer lattice model to simulate the primary and secondary deformation of skeleton-driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including the bone, muscle, fat and skin. Primary deformation is applied on the bone voxels with lattice-based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating a natural secondary deformation. Our multi-layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best applied in real-time applications such as console games or interactive animation creation.
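As a rough illustration of the lattice shape matching step that drives the secondary deformation, here is a minimal NumPy sketch of one cluster update (after Mueller-style shape matching); the stiffness value and the flat particle layout are illustrative choices, not the paper's parameters.

```python
import numpy as np

def shape_match_step(rest, curr, stiffness=0.5):
    """One shape-matching update: pull a voxel cluster toward a
    rigidly transformed copy of its rest shape.
    rest, curr: (n, 3) arrays of particle positions."""
    c0, c1 = rest.mean(axis=0), curr.mean(axis=0)   # centroids
    q, p = rest - c0, curr - c1                     # centered shapes
    U, _, Vt = np.linalg.svd(p.T @ q)               # covariance SVD
    if np.linalg.det(U @ Vt) < 0:                   # avoid a reflection
        U[:, -1] *= -1
    R = U @ Vt                                      # best-fit rotation
    goals = q @ R.T + c1                            # rigid goal positions
    return curr + stiffness * (goals - curr)        # relax toward goals
```

A high stiffness snaps particles to the rigid goal (no jiggle); a low stiffness lets them lag, which is exactly the trade-off the abstract describes.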


international conference on computer graphics and interactive techniques | 2014

Efficient video viewing system for racquet sports with automatic summarization focusing on rally scenes

Shunya Kawamura; Tsukasa Fukusato; Tatsunori Hirai; Shigeo Morishima

Poster presented at SIGGRAPH 2014, August 10–14, 2014, Vancouver, British Columbia, Canada.


international conference on computer graphics and interactive techniques | 2016

Automatic dance generation system considering sign language information

Wakana Asahina; Naoya Iwamoto; Hubert P. H. Shum; Shigeo Morishima

In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), many 3D character dance animation movies are created by amateur users. However, it is very difficult to create choreography from scratch without technical knowledge. Shiratori et al. [2006] produced an automatic dance generation system considering the rhythm and intensity of dance motions; however, each segment is selected randomly from the database, so the generated dance motion carries no linguistic or emotional meaning. Takano et al. [2010] produced a human motion generation system considering motion labels, but they use simple motion labels such as running or jumping, so they cannot generate motions that express emotion. In reality, professional dancers build choreography on music features or lyrics and express emotion, or how they feel about the music. In our work, we aim to generate more emotional dance motion easily; therefore, we use the linguistic information in lyrics to generate dance motion.
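The abstract only outlines the selection idea, so the following is a hypothetical sketch of matching lyric words against labeled motion segments; the segment labels, the word-overlap scoring, and pick_segment itself are stand-ins for the paper's sign-language-based mapping.

```python
def pick_segment(lyric_line, segments):
    """Choose the motion segment whose label words best overlap
    the lyric line. segments: list of (motion_id, label_word_set)."""
    words = set(lyric_line.lower().split())
    best = max(segments, key=lambda s: len(words & s[1]))
    return best[0]

# Hypothetical labeled segments and a lyric line.
segments = [("wave_arms", {"hello", "greet"}),
            ("reach_up", {"sky", "star", "high"})]
print(pick_segment("Reach for the star tonight", segments))  # reach_up
```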


international conference on computer graphics theory and applications | 2015

Dance Motion Segmentation Method based on Choreographic Primitives

Narumi Okada; Naoya Iwamoto; Tsukasa Fukusato; Shigeo Morishima

Data-driven animation using a large human motion database enables the programming of various natural human motions. While the development of motion capture systems allows the acquisition of realistic human motion, the captured motion must be segmented into a series of primitive motions to construct a motion database. Although most segmentation methods have focused on periodic motion, e.g., walking and jogging, segmenting non-periodic and asymmetrical motions such as dance performances remains a challenging problem. In this paper, we present a specialized segmentation approach for human dance motion. Our approach consists of three steps based on the assumption that human dance motion is composed of consecutive choreographic primitives. First, we perform an investigation based on dancer perception to determine segmentation components. Second, after professional dancers have selected segmentation sequences, we use their selections to define rules for segmenting choreographic primitives. Finally, the accuracy of our approach is verified by a user study, which shows that our approach is superior to existing segmentation methods. Building on these three steps, we demonstrate automatic dance motion synthesis based on the obtained choreographic primitives.
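One way to read the rule-based segmentation step is as boundary detection on the motion signal. The sketch below is a guess at such a rule, not the paper's actual one: it cuts where mean joint speed drops below a threshold, following the assumption that choreographic primitives are separated by brief pauses.

```python
import numpy as np

def segment_by_pause(positions, threshold=0.05):
    """positions: (frames, joints, 3) array of joint positions.
    Returns frame indices of candidate segment boundaries."""
    vel = np.diff(positions, axis=0)                   # per-frame displacement
    speed = np.linalg.norm(vel, axis=2).mean(axis=1)   # mean joint speed
    return np.flatnonzero(speed < threshold) + 1       # low-speed frames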


international conference on computer graphics and interactive techniques | 2015

Automatic facial animation generation system of dancing characters considering emotion in dance and music

Wakana Asahina; Narumi Okada; Naoya Iwamoto; Taro Masuda; Tsukasa Fukusato; Shigeo Morishima

In recent years, many 3D character dance animation movies have been created by amateur users with 3DCG animation editing tools (e.g. MikuMikuDance), yet most of them are created manually. An automatic facial animation system for dancing characters would therefore be useful for creating dance movies and visualizing impressions effectively. We address the challenging problem of estimating a dancing character's emotion (which we call dance emotion). Among previous work considering music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation that matches the music's emotion. However, their model cannot express subtleties between two emotions, because the input music is divided sharply into a few moods using a Gaussian mixture model. In addition, they decide finer moods with psychological rules that rely on score information, so MIDI data is required. In this paper, we propose a dance emotion model that visualizes a dancing character's emotion as facial expression. Our model is built from frame-by-frame coordinates in an emotional space obtained through perceptual experiments with a music and dance motion database, without MIDI data. Moreover, by considering displacement in the emotional space, we can express not only a single emotion but also subtleties between emotions. As a result, our system achieved higher accuracy than the previous work. A facial expression result can be created immediately by inputting audio data and synchronized motion; its utility is shown through comparison with previous work in Figure 1.
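To make the emotional-space idea concrete, here is a hypothetical sketch that blends facial expressions by distance to emotion anchors on a 2D valence-arousal plane; the anchor coordinates and inverse-distance weighting are illustrative, while the paper's space comes from perceptual experiments.

```python
import numpy as np

# Hypothetical anchor positions on a (valence, arousal) plane.
ANCHORS = {"happy": np.array([0.8, 0.6]),
           "sad":   np.array([-0.7, -0.5]),
           "angry": np.array([-0.6, 0.7]),
           "calm":  np.array([0.5, -0.6])}

def blend_weights(point):
    """Inverse-distance weights over emotion anchors for one frame,
    so a point between anchors yields a blend rather than one mood."""
    inv = {k: 1.0 / (np.linalg.norm(point - v) + 1e-6)
           for k, v in ANCHORS.items()}
    total = sum(inv.values())
    return {k: w / total for k, w in inv.items()}

print(blend_weights(np.array([0.1, 0.1])))  # subtle mix of all four
```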


international conference on multimedia retrieval | 2018

Face Retrieval Framework Relying on User's Visual Memory

Yugo Sato; Tsukasa Fukusato; Shigeo Morishima

This paper presents an interactive face retrieval framework for clarifying an image representation envisioned by a user. Our system is designed for a situation in which the user wishes to find a person but has only a visual memory of that person. We address a critical challenge of image retrieval across the user's inputs. Instead of providing target-specific information, the user selects several images (or a single image) that are similar to an impression of the target person. Based on the user's selection, our proposed system automatically updates a deep convolutional neural network. By interactively repeating this process (human-in-the-loop optimization), the system reduces the gap between human-based and computer-based similarities and estimates the target image representation. We ran user studies with 10 subjects on a public database and confirmed that the proposed framework clarifies the image representation envisioned by the user easily and quickly.
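The human-in-the-loop optimization can be pictured as iterative re-ranking in an embedding space. In this simplified sketch the user's selections nudge a query point toward their mean; the real system fine-tunes a deep CNN instead, so the fixed embeddings and update rule here are stand-ins.

```python
import numpy as np

def refine_query(query, embeddings, selected_ids, step=0.5):
    """Nudge the query embedding toward the user's selected faces."""
    positives = embeddings[selected_ids].mean(axis=0)
    return query + step * (positives - query)

def rank(query, embeddings, k=5):
    """Return indices of the k faces nearest the current query."""
    d = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(d)[:k]

# One retrieval round: show rank(...), collect selections, refine, repeat.
```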


human factors in computing systems | 2018

Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos

Seita Kayukawa; Keita Higuchi; Ryo Yonetani; Masanori Nakamura; Yoichi Sato; Shigeo Morishima

This work presents Dynamic Object Scanning (DO-Scanning), a novel interface that helps users browse long, untrimmed first-person videos quickly. The proposed interface offers users a small set of object cues, generated automatically and tailored to the context of a given video. Users choose which cue to highlight, and the interface in turn adaptively fast-forwards the video while playing scenes containing the highlighted cues at the original speed. Our experimental results reveal that DO-Scanning arranges an efficient and compact set of cues dynamically, and that this set is useful for browsing a diverse range of first-person videos.
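A minimal sketch of the adaptive fast-forward policy described above, assuming per-frame object-cue sets are already available: frames containing the highlighted cue are kept, and the rest are sampled at a skip rate. Cue detection itself is outside the sketch.

```python
def frames_to_show(cue_per_frame, highlighted, skip=8):
    """cue_per_frame: list of sets of detected object cues.
    Keep every frame with the highlighted cue (original speed);
    sample the remainder (fast-forward)."""
    for i, cues in enumerate(cue_per_frame):
        if highlighted in cues or i % skip == 0:
            yield i
```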


designing interactive systems | 2018

Placing Music in Space: A Study on Music Appreciation with Spatial Mapping

Shoki Miyagawa; Yuki Koyama; Jun Kato; Masataka Goto; Shigeo Morishima

We investigate the potential of music appreciation using spatial mapping techniques, which allow us to place audio sources at various locations within a physical space. We consider possible forms of this new appreciation style and list several design variables, such as how to define the coordinate system, how to present the sources visually, and how to place them in the space. We conducted an exploratory user study to examine how these design variables affect users' music listening experiences. Based on our findings, we discuss how future systems should incorporate these design variables for music appreciation.
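As one concrete reading of the source-placement variable, the sketch below assigns per-instrument stems fixed room positions and attenuates each by listener distance; the stem names, positions, and inverse-distance rolloff are illustrative choices, not the study's design.

```python
import math

# Hypothetical stem positions (x, y) in meters within a room.
STEMS = {"vocals": (0.0, 0.0), "drums": (3.0, 1.0), "bass": (-3.0, 1.0)}

def stem_gains(listener_xy, rolloff=1.0):
    """Inverse-distance gain per stem, clamped to 1.0 at the source."""
    gains = {}
    for name, pos in STEMS.items():
        d = math.dist(listener_xy, pos)
        gains[name] = min(1.0, rolloff / max(d, 1e-6))
    return gains

print(stem_gains((1.0, 0.5)))  # louder vocals when standing near them
```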


eurographics | 2016

Real-time rendering of heterogeneous translucent objects using voxel number map

Keisuke Mochida; Midori Okamoto; Hiroyuki Kubo; Shigeo Morishima

Rendering translucent objects enhances the realism of computer graphics; however, it is still computationally expensive. In this paper, we introduce a real-time rendering technique for heterogeneous translucent objects with complex internal structure. First, as precomputation, we convert mesh models into voxels and generate a look-up table that stores the optical thickness between each pair of surface voxels. Second, we compute radiance in real time using the precomputed optical thickness, generating a voxel number map to fetch the texel values of the look-up table on the GPU. Using the look-up table and the voxel number map, our method renders translucent objects with cavities and different internal media in real time.
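The runtime lookup step can be sketched as follows, assuming the look-up table is a plain dictionary keyed by surface-voxel pairs (standing in for the GPU texture addressed via the voxel number map) and attenuation follows the Beer-Lambert law.

```python
import math

def shade(exit_voxel, lit_voxels, thickness_lut, incoming):
    """Sum attenuated contributions arriving at one surface voxel.
    thickness_lut[(a, b)]: precomputed optical thickness from a to b."""
    radiance = 0.0
    for v in lit_voxels:
        tau = thickness_lut[(v, exit_voxel)]        # precomputed lookup
        radiance += incoming[v] * math.exp(-tau)    # Beer-Lambert falloff
    return radiance
```

Because the optical thickness already accounts for the media crossed between the two voxels, heterogeneous interiors cost no more at runtime than homogeneous ones.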


international conference on computer graphics and interactive techniques | 2014

The efficient and robust sticky viscoelastic material simulation

Kakuto Goto; Naoya Iwamoto; Shunsuke Saito; Shigeo Morishima

Poster presented at SIGGRAPH 2014, August 10–14, 2014, Vancouver, British Columbia, Canada.

Collaboration


Dive into Shigeo Morishima's collaborations.

Top Co-Authors
Masataka Goto

National Institute of Advanced Industrial Science and Technology
