Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yuki Koyama is active.

Publication


Featured research published by Yuki Koyama.


International Conference on Computer Graphics and Interactive Techniques | 2014

Pteromys: interactive design and optimization of free-formed free-flight model airplanes

Nobuyuki Umetani; Yuki Koyama; Ryan Schmidt; Takeo Igarashi

This paper introduces novel interactive techniques for designing original hand-launched free-flight glider airplanes that can actually fly. The aerodynamic properties of a glider aircraft depend on its shape, imposing significant design constraints. We present a compact and efficient representation of glider aerodynamics that can be fit to real-world conditions using a data-driven method. To do so, we acquire a sample set of glider flight trajectories using a video camera, and the system learns a nonlinear relationship between forces on the wing and wing shape. Our acquisition system is much simpler to construct than a wind tunnel, yet it lets us efficiently discover a wing model for simple gliding aircraft. The resulting model can handle general free-form wing shapes and still agrees sufficiently well with the acquired airplane flight trajectories. Based on this compact aerodynamics model, we present a design tool in which the wing configuration created by a user is interactively optimized to maximize flight-ability. To demonstrate the effectiveness of our tool for glider design by novice users, we compare it with a traditional design workflow.


International Conference on Computer Graphics and Interactive Techniques | 2015

AutoConnect: computational design of 3D-printable connectors

Yuki Koyama; Shinjiro Sueda; Emma Steinhardt; Takeo Igarashi; Ariel Shamir; Wojciech Matusik

We present AutoConnect, an automatic method that creates customized, 3D-printable connectors attaching two physical objects together. Users simply position and orient virtual models of the two objects that they want to connect and indicate some auxiliary information such as weight and dimensions. Then, AutoConnect creates several alternative designs that users can choose from for 3D printing. The design of the connector is created by combining two holders, one for each object. We categorize the holders into two types. The first type holds standard objects such as pipes and planes. We utilize a database of parameterized mechanical holders and optimize the holder shape based on the grip strength and material consumption. The second type holds free-form objects. These are procedurally generated shell-gripper designs created based on geometric analysis of the object. We illustrate the use of our method by demonstrating many examples of connectors and practical use cases.


Symposium on Computer Animation | 2012

Real-time example-based elastic deformation

Yuki Koyama; Kenshi Takayama; Nobuyuki Umetani; Takeo Igarashi

We present an example-based elastic deformation method that runs in real time. Example-based elastic deformation was originally presented by Martin et al. [MTGG11], where an artist can intuitively control elastic material behaviors simply by giving example poses. Their FEM-based approach is, however, computationally expensive, requiring nonlinear optimization, which hinders its use in real-time applications such as games. Our contribution is to formulate an analogous concept using the shape matching framework, which is fast, robust, and easy to implement. The key observation is that each overlapping local region's right stretch tensor, obtained by polar decomposition, is a natural choice for a deformation descriptor. This descriptor allows us to represent the pose space as a linear blending of examples. At each time step, the current deformation descriptor is linearly projected onto the example manifold and then used to modify the rest shape of each local region when computing goal positions. Our approach is two orders of magnitude faster than Martin et al.'s approach while producing comparable example-based elastic deformations.
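The descriptor construction described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are invented here, and an unconstrained least-squares fit stands in for the paper's projection onto the example manifold.

```python
import numpy as np
from scipy.linalg import polar

def right_stretch(F):
    """Polar decomposition F = R S; the symmetric positive-semidefinite
    factor S is the right stretch tensor used as a deformation descriptor."""
    R, S = polar(F)  # side='right' is the default: F = R @ S
    return S

def project_onto_examples(S_current, S_examples):
    """Express the current descriptor as a linear blend of example
    descriptors via least squares (a stand-in for manifold projection)."""
    A = np.stack([S.ravel() for S in S_examples], axis=1)  # one column per example
    b = S_current.ravel()
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights
```

For a pure rotation, the right stretch is the identity, so the descriptor is invariant to rigid motion, which is what makes it suitable for comparing poses.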


User Interface Software and Technology | 2014

Crowd-powered parameter analysis for visual design exploration

Yuki Koyama; Daisuke Sakamoto; Takeo Igarashi

Parameter tweaking is one of the fundamental tasks in the editing of visual digital content, such as correcting photo color or executing blendshape facial expression control. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique to analyze such high-dimensional parameter spaces to obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of the analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces. As a result, the system could facilitate the user's design exploration.
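Turning pairwise comparisons into scalar goodness values is commonly done with a Bradley-Terry-style logistic model. The sketch below is an assumption-laden illustration, not the paper's method: it scores a fixed set of discrete items by gradient ascent, whereas the paper estimates a distribution over a continuous parameter space.

```python
import numpy as np

def fit_goodness(n_items, comparisons, lr=0.5, epochs=200):
    """Fit per-item goodness scores from (winner, loser) pairs using a
    Bradley-Terry logistic model, optimized by plain gradient ascent."""
    s = np.zeros(n_items)
    for _ in range(epochs):
        grad = np.zeros(n_items)
        for win, lose in comparisons:
            # Probability the model currently assigns to the observed outcome.
            p = 1.0 / (1.0 + np.exp(-(s[win] - s[lose])))
            grad[win] += 1.0 - p   # push the winner's score up
            grad[lose] -= 1.0 - p  # and the loser's score down
        s += lr * grad / len(comparisons)
    return s - s.mean()  # scores are relative; anchor them at zero mean
```

Items that consistently win comparisons end up with higher scores, giving exactly the kind of "goodness" ordering the interfaces above need.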


Human Factors in Computing Systems | 2016

SelPh: Progressive Learning and Support of Manual Photo Color Enhancement

Yuki Koyama; Daisuke Sakamoto; Takeo Igarashi

Color enhancement is a very important aspect of photo editing. Even when photographers have tens or hundreds of photographs, they must enhance each photo one by one by manually tweaking sliders such as brightness and contrast in software, because automatic color enhancement is not always satisfactory for them. To support this repetitive manual task, we present self-reinforcing color enhancement, where the system implicitly and progressively learns the user's preferences by training on their photo editing history. The more photos the user enhances, the more effectively the system supports the user. We present a working prototype system called SelPh, and then describe the algorithms used to perform the self-reinforcement. We conduct a user study to investigate how photographers would use a self-reinforcing system to enhance a collection of photos. The results indicate that the participants were satisfied with the proposed system and strongly agreed that the self-reinforcing approach is preferable to the traditional workflow.


Symposium on Computer Animation | 2013

View-dependent control of elastic rod simulation for 3D character animation

Yuki Koyama; Takeo Igarashi

This paper presents view-dependent control of elastic rod simulation for 3D character animation. Elastic rod simulation is often used in character animation to generate the motion of passively deforming body parts such as hair, ears, and whiskers. Our goal is to allow artistic control of the simulation in a view-dependent way, for example, to move a hair strand so that it does not hide the eye regardless of the view direction. To achieve this goal, the artist defines several example rest poses of the rod in preparation, each of which is associated with a particular view direction. At run time, the system computes the current rest pose by blending the example rest poses associated with the view directions near the current view direction, and then pulls the simulated pose toward this rest pose. Our technical contributions are the formulation of example-based rod simulation using view direction as an input, and an algorithm to suppress the undesirable increase of momentum caused by dynamically changing rest poses.
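The core idea of blending example rest poses by view direction can be illustrated with a simple weighting scheme. This is a hypothetical sketch: inverse-angular-distance weights are an assumption made here for clarity, not the paper's actual blending formulation.

```python
import numpy as np

def blend_rest_poses(view_dir, example_dirs, example_poses, eps=1e-8):
    """Blend example rest poses, weighting each example by the inverse of
    the angle between its view direction and the current view direction."""
    v = view_dir / np.linalg.norm(view_dir)
    weights = []
    for d in example_dirs:
        d = d / np.linalg.norm(d)
        angle = np.arccos(np.clip(v @ d, -1.0, 1.0))
        weights.append(1.0 / (angle + eps))  # nearer views dominate
    w = np.array(weights)
    w /= w.sum()
    return sum(wi * np.asarray(p) for wi, p in zip(w, example_poses))
```

When the camera sits exactly on an example's view direction, that example's weight dominates and the blended rest pose reduces to that example, matching the behavior an artist would expect.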


Computer Graphics Forum | 2018

Decomposing Images into Layers with Advanced Color Blending

Yuki Koyama; Masataka Goto

Digital paintings are often created by compositing semi‐transparent layers using various advanced color‐blend modes, such as “color‐burn,” “multiply,” and “screen,” which can produce interesting non‐linear color effects. We propose a method of decomposing an input image into layers with such advanced color blending. Unlike previous layer‐decomposition methods, which typically support only linear color‐blend modes, ours can handle any user‐specified color‐blend modes. To enable this, we generalize a previous color‐unblending formulation, in which only a specific layering model was considered. We also introduce several techniques for adapting our generalized formulation to practical use, such as post‐processing for refining smoothness. Our method lets users explore possible decompositions to find the one that matches their purposes by manipulating the target color‐blend mode and desired color distribution for each layer, as well as the number of layers. Thus, the output of our method is a layered, easily editable image composition organized in a way that digital artists are familiar with. Our method is useful for remixing existing illustrations, flexibly editing single‐layer paintings, and bringing physically painted media (e.g., oil paintings) into a digital workflow.
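The three blend modes named in the abstract have standard per-channel formulas (as specified in the W3C Compositing and Blending spec). A minimal numpy sketch, with channel values in [0, 1], where b is the backdrop and s is the source layer; the small-epsilon guard in color-burn is an implementation choice made here:

```python
import numpy as np

def multiply(b, s):
    """Multiply: darkens; identity when the other operand is 1."""
    return np.asarray(b, float) * np.asarray(s, float)

def screen(b, s):
    """Screen: lightens; complement of multiplying the complements."""
    b, s = np.asarray(b, float), np.asarray(s, float)
    return b + s - b * s

def color_burn(b, s):
    """Color-burn: darkens the backdrop to reflect the source.
    Edge cases per the spec: 1 if b == 1, else 0 if s == 0."""
    b, s = np.asarray(b, float), np.asarray(s, float)
    out = 1.0 - np.minimum(1.0, (1.0 - b) / np.maximum(s, 1e-12))
    return np.where(b >= 1.0, 1.0, np.where(s <= 0.0, 0.0, out))
```

These are exactly the non-linearities that make unblending harder than for plain alpha compositing: multiply and screen are bilinear, but color-burn involves a division, so inverting a stack of such layers requires the kind of generalized formulation the paper proposes.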


User Interface Software and Technology | 2016

Computational Design Driven by Aesthetic Preference

Yuki Koyama

Tweaking design parameters is one of the most fundamental tasks in many design domains. In this paper, we describe three computational design methods for parameter tweaking tasks in which aesthetic preference---how aesthetically preferable the design looks---is used as a criterion to be maximized. The first method estimates a preference distribution in the target parameter space using crowdsourced human computation. The estimated preference distribution is then used in a design interface to facilitate interactive design exploration. The second method also estimates a preference distribution and uses it in an interface, but the distribution is estimated from the editing history of the target user. In contrast to these two methods, the third method automatically finds the best parameter that maximizes aesthetic preference, without requiring the user to manually tweak parameters. This is enabled by implementing optimization algorithms using crowdsourced human computation. We validated these methods mainly in the scenario of photo color enhancement, where parameters such as brightness and contrast need to be tweaked.


Pacific Conference on Computer Graphics and Applications | 2016

An interactive design system of free-formed bamboo-copters

Morihiro Nakamura; Yuki Koyama; Daisuke Sakamoto; Takeo Igarashi

We present an interactive design system for designing free‐formed bamboo‐copters, where novices can easily design free‐formed, even asymmetric, bamboo‐copters that successfully fly. The designed bamboo‐copters can be fabricated using digital fabrication equipment, such as a laser cutter. Our system provides two useful functions for facilitating this design activity. First, it visualizes a simulated flight trajectory of the current bamboo‐copter design, which is updated in real time during the user's editing. Second, it provides an optimization function that automatically tweaks the current bamboo‐copter design such that the spin quality—how stably it spins—and the flight quality—how high and long it flies—are enhanced. To enable these functions, we present non‐trivial extensions over existing techniques for designing free‐formed model airplanes [UKSI14], including a wing discretization method tailored to free‐formed bamboo‐copters and an optimization scheme for achieving stable bamboo‐copters considering both spin and flight qualities.


Eurographics | 2016

Interactive deformation of structurally complex heart models constructed from medical images

Kazutaka Nakashima; Yuki Koyama; Takeo Igarashi; Takashi Ijiri; Shin Inada; Kazuo Nakazawa

We present a data structure for interactive deformation of complicated organ models, such as hearts, and a technique for automatically constructing the data structure from given medical images. The data structure is a dual model comprising a graph structure for elastic simulation and a surface mesh for visualization. The system maps the simulation results to the mesh using a skinning technique. First, the system generates a dense graph and mesh from the input medical images; then, it reduces them independently. Finally, the system establishes correspondence between the reduced graph and mesh by backtracking the reduction process. We also present an interactive browser for exploring heart shapes, and report initial feedback from target users.

Collaboration


Dive into Yuki Koyama's collaborations.

Top Co-Authors
