Publication


Featured research published by Yoshihiro Kanamori.


Computer Graphics Forum | 2008

GPU-based Fast Ray Casting for a Large Number of Metaballs

Yoshihiro Kanamori; Zoltan Szego; Tomoyuki Nishita

Metaballs are implicit surfaces widely used to model curved objects, represented by the isosurface of a density field defined by a set of points. Recently, the results of particle-based simulations have often been visualized using a large number of metaballs; however, such visualizations have high rendering costs. In this paper we propose a fast technique for rendering metaballs on the GPU. Instead of using polygonization, the isosurface is evaluated directly in a per-pixel manner. For such evaluation, all metaballs contributing to the isosurface need to be extracted along each viewing ray within the limited memory of GPUs. We handle this by keeping a list of the metaballs contributing to the isosurface and updating it efficiently. Our method requires neither expensive precomputation nor the acceleration data structures often used in existing ray tracing techniques. With several optimizations, we can display a large number of moving metaballs quickly.
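
To make the per-pixel evaluation concrete, here is a minimal CPU sketch in Python: metaballs whose support spheres cannot touch the ray are culled, and the summed density is tested against the iso-threshold while stepping along the ray. The Wyvill-style falloff kernel, the uniform radius, and all parameter names are illustrative assumptions; the paper's actual contribution is the GPU scheme for maintaining the per-ray list of contributing metaballs.

```python
import numpy as np

def falloff(r2, R2):
    """Polynomial falloff: 1 at the center, 0 at radius R."""
    t = np.clip(1.0 - r2 / R2, 0.0, None)
    return t * t * t

def ray_metaball_hit(origin, direction, centers, radius,
                     iso=0.5, t_max=10.0, steps=512):
    """Return the first ray point where the summed density crosses iso."""
    direction = direction / np.linalg.norm(direction)
    # Cull: keep only metaballs whose support sphere can touch the ray.
    to_c = centers - origin                       # (N, 3)
    t_closest = to_c @ direction                  # ray parameter of closest point
    closest = origin + np.outer(t_closest, direction)
    d2 = np.sum((centers - closest) ** 2, axis=1)
    active = centers[(d2 < radius ** 2) & (t_closest > -radius)]
    if len(active) == 0:
        return None
    # March along the ray, testing the density sum against the threshold.
    for t in np.linspace(0.0, t_max, steps):
        p = origin + t * direction
        r2 = np.sum((active - p) ** 2, axis=1)
        if falloff(r2, radius ** 2).sum() >= iso:
            return p
    return None
```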


Computer Graphics Forum | 2008

Real-time Animation of Sand-Water Interaction

Witawat Rungjiratananon; Zoltan Szego; Yoshihiro Kanamori; Tomoyuki Nishita

Recent advances in physically-based simulations have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have rarely been addressed despite their visual importance, especially in interactions between fluids and granular materials.


Computer Graphics Forum | 2010

Chain Shape Matching for Simulating Complex Hairstyles

Witawat Rungjiratananon; Yoshihiro Kanamori; Tomoyuki Nishita

Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or as a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated spring setup and suffered from high computational cost. In this paper, we base the animation of hair at such a fine scale on Lattice Shape Matching (LSM), which has been used successfully for simulating deformable objects. Our method regards each strand of hair as a chain of particles and computes geometrically derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles such as curly or afro styles in a numerically stable way. While our method is not physically based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
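
The geometric core is the rigid shape-matching step of Mueller et al. (2005), which the chain scheme applies along overlapping windows of particles. Below is a minimal single-region sketch; the single-region scope, explicit update, and all constants are illustrative assumptions, not the paper's GPU chain algorithm.

```python
import numpy as np

def shape_match_goals(x, x0):
    """Goal positions that rigidly fit the rest shape x0 to the current x."""
    c, c0 = x.mean(axis=0), x0.mean(axis=0)
    P, Q = x - c, x0 - c0
    A = P.T @ Q                                # 3x3 moment matrix
    U, _, Vt = np.linalg.svd(A)                # polar decomposition via SVD
    if np.linalg.det(U @ Vt) < 0:              # avoid reflections
        U[:, -1] *= -1.0
    R = U @ Vt
    return (R @ Q.T).T + c                     # goal positions g_i

def step(x, v, x0, dt=1e-2, stiffness=0.8, gravity=(0.0, -9.8, 0.0)):
    """Explicit step pulling particles toward their goal positions."""
    v = v + dt * np.asarray(gravity)
    x_pred = x + dt * v
    g = shape_match_goals(x_pred, x0)
    x_new = x_pred + stiffness * (g - x_pred)  # geometrically derived 'force'
    v = (x_new - x) / dt                       # keep velocity consistent
    return x_new, v
```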


Computer Graphics Forum | 2012

Wetting Effects in Hair Simulation

Witawat Rungjiratananon; Yoshihiro Kanamori; Tomoyuki Nishita

There has been considerable recent progress in hair simulation, driven by high demand in computer-animated movies. However, capturing the complex interactions between hair and water is still in its infancy. Such interactions are best modeled as those between water and an anisotropic permeable medium, since water can flow into and out of the hair volume biased in the hair fiber direction. Modeling the interaction is further complicated when the hair is allowed to move. In this paper, we introduce a simulation model that reproduces interactions between water and hair as a dynamic anisotropic permeable material. We utilize an Eulerian approach for capturing the microscopic porosity of hair and handle the wetting effects using a Cartesian bounding grid. A Lagrangian approach is used to simulate every single hair strand, including interactions among strands, yielding finely detailed dynamic hair simulation. Our model and simulation generate many interesting effects of interactions between finely detailed dynamic hair and water, including water absorption and diffusion, cohesion of wet hair strands, water flow within the hair volume, water dripping from the wet hair strands, and morphological shape transformations of wet hair.
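
As a toy illustration of two of the listed effects, the sketch below diffuses a per-segment water content along a single strand and lets over-capacity water migrate toward the tip and drip off the end. This is a deliberately simplified 1D stand-in, not the paper's Eulerian/Lagrangian model; all constants are assumptions.

```python
import numpy as np

def wet_strand_step(w, dt=0.01, k=0.5, capacity=1.0):
    """One step of toy water transport along a strand.
    w: water content per segment, ordered root -> tip."""
    # diffusion along the fiber direction (discrete 1D Laplacian)
    lap = np.zeros_like(w)
    lap[1:-1] = w[2:] - 2.0 * w[1:-1] + w[:-2]
    lap[0] = w[1] - w[0]
    lap[-1] = w[-2] - w[-1]
    w = w + dt * k * lap
    # gravity bias: water above capacity moves one segment tipward;
    # excess at the last segment leaves the strand (dripping)
    excess = np.clip(w - capacity, 0.0, None)
    w = w - excess
    w[1:] += excess[:-1]
    return w

w = np.zeros(20)
w[0] = 5.0                      # strand dipped in water at the root
for _ in range(100):
    w = wet_strand_step(w)
```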


The Visual Computer | 2011

An interactive design system for pop-up cards with a physical simulation

Satoshi Iizuka; Yuki Endo; Jun Mitani; Yoshihiro Kanamori; Yukio Fukui

We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.
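
A minimal sketch of the kind of mass-spring step such a simulator takes, together with a simple collision/protrusion test against a plane. The spring constant, unit masses, and explicit integrator are assumptions for illustration; in the actual system the spring layout follows the card's fold pattern.

```python
import numpy as np

def spring_forces(x, edges, rest, k=200.0):
    """Hooke forces for springs given as (i, j) vertex-index pairs."""
    f = np.zeros_like(x)
    for (i, j), r in zip(edges, rest):
        d = x[j] - x[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            fij = k * (length - r) * d / length
            f[i] += fij
            f[j] -= fij
    return f

def step(x, v, edges, rest, dt=1e-3, damping=0.98):
    """One damped explicit Euler step; unit mass per vertex."""
    v = damping * (v + dt * spring_forces(x, edges, rest))
    return x + dt * v, v

def protrudes(x, n=np.array([0.0, 0.0, 1.0]), offset=0.0):
    """Protrusion flag: any vertex behind the plane n.x = offset."""
    return bool(np.any(x @ n < offset))
```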


Computer Graphics Forum | 2016

DeepProp: extracting deep features from a single image for edit propagation

Yuki Endo; Satoshi Iizuka; Yoshihiro Kanamori; Jun Mitani

Edit propagation is a technique that can propagate various image edits (e.g., colorization and recoloring) performed via user strokes to the entire image based on similarity of image features. In most previous work, users must manually determine the importance of each image feature (e.g., color, coordinates, and textures) in accordance with their needs and target images. We focus on representation learning that automatically learns feature representations only from user strokes in a single image, instead of tuning existing features manually. To this end, this paper proposes an edit propagation method using a deep neural network (DNN). Our DNN, which consists of several layers such as convolutional layers and a feature combiner, extracts stroke-adapted visual features and spatial features, and then adjusts their importance. We also develop a learning algorithm for our DNN that does not suffer from the vanishing gradient problem, and hence avoids falling into undesirable locally optimal solutions. We demonstrate that edit propagation with deep features, without manual feature tuning, can achieve better results than previous work.
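
For intuition, the stroke-supervised train-then-propagate loop can be reduced to a tiny per-pixel classifier. The sketch below uses a logistic regression on raw color and coordinate features purely as a stand-in for the paper's learned deep features; all names and hyperparameters are assumptions.

```python
import numpy as np

def propagate(image, stroke_mask, labels, iters=500, lr=0.5):
    """image: (H, W, 3) floats in [0, 1]; stroke_mask: bool (H, W);
    labels: (H, W) in {0, 1}, valid where stroke_mask is True.
    Returns a soft (H, W) propagation map for the edit."""
    H, W, _ = image.shape
    yy, xx = np.mgrid[0:H, 0:W]
    feats = np.dstack([image, xx / W, yy / H]).reshape(-1, 5)
    feats = np.hstack([feats, np.ones((feats.shape[0], 1))])  # bias term
    Xs = feats[stroke_mask.ravel()]
    ys = labels.ravel()[stroke_mask.ravel()].astype(float)
    w = np.zeros(6)
    for _ in range(iters):          # gradient descent on cross-entropy
        p = 1.0 / (1.0 + np.exp(-Xs @ w))
        w -= lr * Xs.T @ (p - ys) / len(ys)
    return (1.0 / (1.0 + np.exp(-feats @ w))).reshape(H, W)
```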


IEEE Computer Graphics and Applications | 2012

Efficiently Modeling 3D Scenes from a Single Image

Satoshi Iizuka; Yoshihiro Kanamori; Jun Mitani; Yukio Fukui

A proposed system lets users create a 3D scene easily and quickly from a single image. The scene model consists of background and foreground objects whose coordinates the system calculates on the basis of a boundary between the ground plane and a wall plane. The system quickly extracts foreground objects by combining image segmentation and graph-cut-based optimization. It enables efficient modeling of foreground objects, easy creation of their textures, and rapid construction of scene models that are simple but produce sufficient 3D effects.
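
The coordinate computation from a ground/wall boundary rests on a simple pinhole relation: for a level camera at height h with focal length f (in pixels), a ground pixel r rows below the horizon lies at depth z = f*h/r. A minimal sketch, with all camera parameters as assumptions:

```python
import numpy as np

def ground_depth(rows, horizon_row, focal_px, cam_height):
    """Depth of ground-plane pixels at the given image rows.
    Rows must lie below the horizon (rows > horizon_row)."""
    rows = np.asarray(rows, dtype=float)
    return focal_px * cam_height / (rows - horizon_row)

# Example: camera 1.6 m high, f = 800 px, horizon at row 300.
print(ground_depth([350, 450, 700], horizon_row=300,
                   focal_px=800.0, cam_height=1.6))
```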


Eurographics | 2010

Motion Blur for EWA Surface Splatting

Simon Heinzle; Johanna Wolf; Yoshihiro Kanamori; Tim Weyrich; Tomoyuki Nishita; Markus H. Gross

This paper presents a novel framework for elliptical weighted average (EWA) surface splatting with time‐varying scenes. We extend the theoretical basis of the original framework by replacing the 2D surface reconstruction filters by 3D kernels which unify the spatial and temporal component of moving objects. Based on the newly derived mathematical framework we introduce a rendering algorithm that supports the generation of high‐quality motion blur for point‐based objects using a piecewise linear approximation of the motion. The rendering algorithm applies ellipsoids as rendering primitives which are constructed by extending planar EWA surface splats into the temporal dimension along the instantaneous motion vector. Finally, we present an implementation of the proposed rendering algorithm with approximated occlusion handling using advanced features of modern GPUs and show its capability of producing motion‐blurred result images at interactive frame rates.
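
Geometrically, the rendering primitive can be pictured as follows: take the two Gaussian axes spanning a planar splat and add a third axis along the instantaneous motion vector scaled by the shutter interval, giving a 3x3 covariance for an ellipsoidal kernel. A numpy sketch of that construction; the axis scaling and names are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def motion_blur_ellipsoid(t1, t2, sigma1, sigma2, velocity, shutter):
    """Covariance of an ellipsoidal kernel for a moving splat.
    t1, t2: unit tangent vectors spanning the splat plane.
    sigma1, sigma2: Gaussian radii of the planar splat.
    velocity: instantaneous motion vector; shutter: exposure time."""
    axes = np.column_stack([sigma1 * t1,
                            sigma2 * t2,
                            0.5 * shutter * velocity])  # half-extents
    # Note: rank-2 (flat) if the velocity lies in the splat plane.
    return axes @ axes.T

# Example: a splat in the xy-plane moving along +x.
t1, t2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
Sigma = motion_blur_ellipsoid(t1, t2, 0.1, 0.1,
                              velocity=np.array([2.0, 0.0, 0.0]),
                              shutter=1.0 / 60.0)
```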


Computer Graphics Forum | 2014

Efficient Depth Propagation for Constructing a Layered Depth Image from a Single Image

Satoshi Iizuka; Yuki Endo; Yoshihiro Kanamori; Jun Mitani; Yukio Fukui

In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI), which is composed of a foreground layer and a background layer, each with a corresponding texture and depth map. Given user-specified sparse depth inputs, depth maps are computed based on superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using the depth discontinuity. In our interface, the user paints strokes directly on the 3D model. The drawn strokes serve as 3D handles with which the user can pull out or push in the 3D surface easily and intuitively, with real-time feedback. We show that our technique enables efficient modeling of LDIs that produce convincing 3D effects.
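
Below is a sketch of geodesic-distance-weighted interpolation over a superpixel graph, the flavor of weighting the abstract describes (the paper pairs it with an optimization framework). Nodes stand for superpixels and edge weights for color differences; each node blends the user-given depths with weights decaying in geodesic distance. The graph format and falloff are assumptions.

```python
import heapq
import numpy as np

def geodesic_dists(n_nodes, edges, source):
    """Dijkstra from one source; edges: dict {(i, j): weight}."""
    adj = {i: [] for i in range(n_nodes)}
    for (i, j), w in edges.items():
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = np.full(n_nodes, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def propagate_depth(n_nodes, edges, depth_constraints, falloff=5.0):
    """depth_constraints: dict {superpixel index: user-given depth}."""
    weights = np.zeros(n_nodes)
    depth = np.zeros(n_nodes)
    for node, d_user in depth_constraints.items():
        w = np.exp(-falloff * geodesic_dists(n_nodes, edges, node))
        weights += w
        depth += w * d_user
    return depth / np.maximum(weights, 1e-12)
```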


Cyberworlds | 2014

Image-Based Virtual Fitting System with Garment Image Reshaping

Hiroki Yamada; Masaki Hirose; Yoshihiro Kanamori; Jun Mitani; Yukio Fukui

We propose an image-based virtual fitting system to reproduce the appearance of fitting during online shopping for garments. The inputs are whole-body images of a fashion model and the customer. We create a garment image by cutting out the garment portion from the image of the fashion model. If the garment image were naïvely superimposed, the fitting result would look strange, mainly because the shape of the garment would not match the body shape of the customer. In this paper, we therefore propose a method of reshaping the garment image based on the human body shapes of the fashion model and the customer to make the fitting result more realistic. The body shape is automatically estimated from the contours of the human body and can easily be retouched if necessary. The fitting result is refined further by automatic color correction with reference to the facial regions and by a method of retouching parts that protrude from behind the garment image. We verified the effectiveness of our system through a user test.
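
The color correction referenced to facial regions can be approximated by Reinhard-style channel-wise statistics matching between the two face crops. The sketch below is that simple stand-in, offered as an assumption rather than the paper's exact method.

```python
import numpy as np

def match_color(garment, src_face, dst_face, eps=1e-6):
    """Shift garment colors so src_face statistics match dst_face.
    All inputs are float arrays in [0, 1] with shape (..., 3)."""
    mu_s = src_face.reshape(-1, 3).mean(axis=0)
    sd_s = src_face.reshape(-1, 3).std(axis=0)
    mu_d = dst_face.reshape(-1, 3).mean(axis=0)
    sd_d = dst_face.reshape(-1, 3).std(axis=0)
    out = (garment - mu_s) * (sd_d / (sd_s + eps)) + mu_d
    return np.clip(out, 0.0, 1.0)
```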

Collaboration


Top co-authors of Yoshihiro Kanamori:

Tomoyuki Nishita (Hiroshima Shudo University)
Yuki Endo (University of Tsukuba)
Man Zhang (University of Tsukuba)
Yan Zhao (University of Tsukuba)