Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Satoshi Iizuka is active.

Publication


Featured research published by Satoshi Iizuka.


International Conference on Computer Graphics and Interactive Techniques | 2016

Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification

Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa

We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing CNN-based approaches. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.
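
As a rough illustration of the fusion idea described above (PyTorch, with illustrative channel sizes rather than the published configuration), the sketch below replicates a per-image global feature vector at every spatial location and merges it with the local feature map through a 1x1 convolution.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Minimal sketch of a fusion layer: merges a per-image global
    feature vector with a spatial (local) feature map. Channel sizes
    are illustrative, not the published architecture."""

    def __init__(self, local_ch=256, global_ch=256, out_ch=256):
        super().__init__()
        # A 1x1 convolution mixes the concatenated local+global channels.
        self.mix = nn.Conv2d(local_ch + global_ch, out_ch, kernel_size=1)

    def forward(self, local_feat, global_feat):
        # local_feat: (B, local_ch, H, W); global_feat: (B, global_ch)
        b, _, h, w = local_feat.shape
        # Replicate the global vector at every spatial location.
        g = global_feat.view(b, -1, 1, 1).expand(b, global_feat.size(1), h, w)
        fused = torch.cat([local_feat, g], dim=1)
        return torch.relu(self.mix(fused))

# Usage: fuse a 28x28 local feature map with a 256-d global descriptor.
fusion = FusionLayer()
out = fusion(torch.randn(2, 256, 28, 28), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256, 28, 28])
```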


ACM Transactions on Graphics | 2017

Globally and locally consistent image completion

Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in fine detail. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete images of objects with familiar and highly specific structures, such as faces.
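
The two-discriminator setup can be sketched as follows; the layer counts, feature widths, and crop handling here are placeholders, not the published architecture. The global branch scores the whole image, the local branch scores a crop centered on the completed region, and both feed a single real-versus-completed logit.

```python
import torch
import torch.nn as nn

def conv_stack(in_ch=3, width=64):
    """Small strided-conv feature extractor (illustrative only)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class GlobalLocalDiscriminator(nn.Module):
    """Sketch: a global branch sees the whole image, a local branch sees
    a crop centered on the completed region; their features are
    concatenated into a single real/completed score."""

    def __init__(self):
        super().__init__()
        self.global_branch = conv_stack()
        self.local_branch = conv_stack()
        self.head = nn.Linear(128 * 2, 1)  # 128 features per branch

    def forward(self, image, crop_box):
        y0, x0, size = crop_box  # top-left corner and side length of the local crop
        local_patch = image[:, :, y0:y0 + size, x0:x0 + size]
        feat = torch.cat([self.global_branch(image),
                          self.local_branch(local_patch)], dim=1)
        return self.head(feat)  # raw logit: real vs. completed

disc = GlobalLocalDiscriminator()
score = disc(torch.randn(1, 3, 128, 128), crop_box=(32, 32, 64))
print(score.shape)  # torch.Size([1, 1])
```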


International Conference on Computer Graphics and Interactive Techniques | 2016

Learning to simplify: fully convolutional networks for rough sketch cleanup

Edgar Simo-Serra; Satoshi Iizuka; Kazuma Sasaki; Hiroshi Ishikawa

In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable to vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time, and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.
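
A minimal fully convolutional encoder-decoder, shown below, illustrates why such a model accepts inputs of any size and returns an output of the same dimensions; the layer sizes are placeholders, and the input sides are assumed to be multiples of 4 so the strided layers invert cleanly.

```python
import torch
import torch.nn as nn

# Minimal fully convolutional encoder-decoder sketch (not the published
# architecture): because every layer is convolutional, the same weights
# apply to inputs of any size (here, sides divisible by 4), and the
# output has the same spatial dimensions as the input.
simplifier = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),             # downsample x2
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),            # downsample x2
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),                      # flat convolution
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # upsample x2
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # upsample x2
)

rough = torch.rand(1, 1, 320, 424)   # grayscale rough sketch, arbitrary aspect ratio
clean = simplifier(rough)
print(clean.shape)  # torch.Size([1, 1, 320, 424])
```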


The Visual Computer | 2011

An interactive design system for pop-up cards with a physical simulation

Satoshi Iizuka; Yuki Endo; Jun Mitani; Yoshihiro Kanamori; Yukio Fukui

We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.
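
A toy mass-spring integrator along these lines is sketched below (explicit Euler, unit masses, made-up constants); a pop-up simulator would add fold constraints plus the collision and protrusion tests described above.

```python
import numpy as np

# Minimal mass-spring sketch (illustrative, not the paper's simulator):
# particles connected by springs are integrated with explicit Euler.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])   # particle positions
velocities = np.zeros_like(positions)
springs = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.2)]            # (i, j, rest length)
stiffness, damping, dt = 50.0, 0.98, 0.01

def step(positions, velocities):
    forces = np.zeros_like(positions)
    for i, j, rest in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        # Hooke's law along the spring direction.
        f = stiffness * (length - rest) * d / max(length, 1e-8)
        forces[i] += f
        forces[j] -= f
    velocities = damping * (velocities + dt * forces)  # unit masses
    return positions + dt * velocities, velocities

for _ in range(100):
    positions, velocities = step(positions, velocities)
print(positions.round(3))
```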


Computer Graphics Forum | 2016

DeepProp: extracting deep features from a single image for edit propagation

Yuki Endo; Satoshi Iizuka; Yoshihiro Kanamori; Jun Mitani

Edit propagation is a technique that can propagate various image edits (e.g., colorization and recoloring) performed via user strokes to the entire image based on similarity of image features. In most previous work, users must manually determine the importance of each image feature (e.g., color, coordinates, and textures) in accordance with their needs and target images. We focus on representation learning that automatically learns feature representations only from user strokes in a single image instead of tuning existing features manually. To this end, this paper proposes an edit propagation method using a deep neural network (DNN). Our DNN, which consists of several layers such as convolutional layers and a feature combiner, extracts stroke-adapted visual features and spatial features, and then adjusts their importance. We also develop a learning algorithm for our DNN that does not suffer from the vanishing gradient problem, and hence avoids falling into undesirable locally optimal solutions. We demonstrate that edit propagation with deep features, without manual feature tuning, can achieve better results than previous work.
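
The sketch below is only a loose illustration of learning feature importance from strokes, not the DeepProp architecture: a tiny two-branch network combines per-pixel color and coordinate features, is trained on hypothetical stroke labels, and can then be evaluated on every pixel to propagate the edit.

```python
import torch
import torch.nn as nn

class EditPropagationNet(nn.Module):
    """Illustrative two-branch network: learns, from stroke pixels, how to
    weight color and coordinate features and predicts a per-pixel edit mask."""

    def __init__(self, color_dim=3, coord_dim=2, hidden=32):
        super().__init__()
        self.color_branch = nn.Sequential(nn.Linear(color_dim, hidden), nn.ReLU())
        self.coord_branch = nn.Sequential(nn.Linear(coord_dim, hidden), nn.ReLU())
        # The combiner learns how much each feature type matters.
        self.combiner = nn.Sequential(nn.Linear(hidden * 2, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, colors, coords):
        h = torch.cat([self.color_branch(colors), self.coord_branch(coords)], dim=1)
        return torch.sigmoid(self.combiner(h))  # probability that the edit applies

net = EditPropagationNet()
# Train on labeled stroke pixels (1 = edit stroke, 0 = background stroke),
# then evaluate on all pixels to propagate the edit. Labels here are toy data.
colors, coords = torch.rand(500, 3), torch.rand(500, 2)
labels = (colors[:, 0] > 0.5).float().unsqueeze(1)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(net(colors, coords), labels)
    loss.backward()
    opt.step()
print(loss.item())
```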


IEEE Computer Graphics and Applications | 2012

Efficiently Modeling 3D Scenes from a Single Image

Satoshi Iizuka; Yoshihiro Kanamori; Jun Mitani; Yukio Fukui

A proposed system lets users create a 3D scene easily and quickly from a single image. The scene model consists of background and foreground objects whose coordinates the system calculates on the basis of a boundary between the ground plane and a wall plane. The system quickly extracts foreground objects by combining image segmentation and graph-cut-based optimization. It enables efficient modeling of foreground objects, easy creation of their textures, and rapid construction of scene models that are simple but produce sufficient 3D effects.
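
As a toy illustration of graph-cut-based foreground extraction (hypothetical capacities, not the paper's energy terms), the sketch below links each pixel of a 3x3 grid to a foreground source and a background sink, adds smoothness edges between neighbors, and reads the labeling off the minimum s-t cut.

```python
import networkx as nx

# Each pixel connects to source "SRC" (foreground) and sink "SNK" (background)
# with capacities derived from user seeds; neighbors share smoothness edges.
G = nx.DiGraph()
pixels = [(y, x) for y in range(3) for x in range(3)]
fg_seeds, bg_seeds = {(1, 1)}, {(0, 0), (2, 2)}
for p in pixels:
    G.add_edge("SRC", p, capacity=100.0 if p in fg_seeds else 1.0)
    G.add_edge(p, "SNK", capacity=100.0 if p in bg_seeds else 1.0)
for (y, x) in pixels:
    for q in ((y + 1, x), (y, x + 1)):        # 4-neighborhood smoothness edges
        if q in G:
            G.add_edge((y, x), q, capacity=5.0)
            G.add_edge(q, (y, x), capacity=5.0)

cut_value, (fg_side, _) = nx.minimum_cut(G, "SRC", "SNK")
print(sorted(p for p in fg_side if p != "SRC"))   # pixels labeled foreground
```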


Computer Graphics Forum | 2014

Efficient Depth Propagation for Constructing a Layered Depth Image from a Single Image

Satoshi Iizuka; Yuki Endo; Yoshihiro Kanamori; Jun Mitani; Yukio Fukui

In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI), which is composed of a foreground layer and a background layer, each with a corresponding texture and depth map. Given user-specified sparse depth inputs, depth maps are computed based on superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using the depth discontinuity. In our interface, the user paints strokes on the 3D model directly. The drawn strokes serve as 3D handles with which the user can pull out or push in the 3D surface easily and intuitively with real-time feedback. We show that our technique enables efficient modeling of LDIs that produce sufficient 3D effects.
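
The geodesic-distance-weighted interpolation can be illustrated with the following sketch, which treats superpixels as graph nodes, uses Dijkstra for geodesic distances, and applies a Gaussian weighting with a made-up sigma; the edge weights and weighting function are assumptions, not the paper's exact formulation.

```python
import heapq
import numpy as np

def dijkstra(n_nodes, edges, source):
    """Shortest-path (geodesic) distances from one superpixel to all others."""
    dist = np.full(n_nodes, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in edges.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def propagate_depth(n_superpixels, edges, user_depths, sigma=1.0):
    # user_depths: {superpixel index: depth value} from sparse user strokes.
    dists = {s: dijkstra(n_superpixels, edges, s) for s in user_depths}
    weights = {s: np.exp(-(d / sigma) ** 2) for s, d in dists.items()}
    total = sum(weights.values())
    return sum(w * user_depths[s] for s, w in weights.items()) / total

# Toy chain of 5 superpixels; edge weight = color difference between neighbors.
edges = {0: [(1, 0.2)], 1: [(0, 0.2), (2, 0.5)], 2: [(1, 0.5), (3, 0.1)],
         3: [(2, 0.1), (4, 0.3)], 4: [(3, 0.3)]}
print(propagate_depth(5, edges, {0: 1.0, 4: 5.0}))
```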


International Conference on Computer Graphics and Interactive Techniques | 2018

Mastering Sketching: Adversarial Augmentation for Structured Prediction

Edgar Simo-Serra; Satoshi Iizuka; Hiroshi Ishikawa

We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both networks jointly so that the discriminator network discerns whether a line drawing is real training data or the output of the simplification network, which, in turn, tries to fool it. This approach has two major advantages: first, because the discriminator network learns the structure in line drawings, it encourages the output sketches of the simplification network to be more similar in appearance to the training sketches. Second, we can also train the networks with additional unsupervised data: by adding rough sketches and line drawings that do not correspond to each other, we can improve the quality of the sketch simplification. Thanks to a difference in architecture, our approach has advantages over similar adversarial training approaches in training stability and in the aforementioned ability to utilize unsupervised training data. We show how our framework can be used to train models that significantly outperform the state of the art in the sketch simplification task, despite using the same architecture for inference. We also present an approach to optimize for a single image, which improves accuracy at the cost of additional computation time. Finally, we show that, using the same framework, it is possible to train the network to perform the inverse problem, i.e., convert simple line sketches into pencil drawings, which is not possible using the standard mean squared error loss. We validate our framework with two user tests, in which our approach is preferred to the state of the art in sketch simplification 88.9% of the time.
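
The loss composition can be sketched roughly as below, with stand-in networks and hypothetical equal weighting of terms: paired data contributes an MSE term plus an adversarial term, unpaired clean line drawings strengthen the discriminator's notion of "real", and unpaired rough sketches add an extra generator term.

```python
import torch
import torch.nn as nn

# Stand-in networks only; the published generator/discriminator are deeper.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
bce = nn.BCEWithLogitsLoss()

rough, clean = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)   # paired data
unpaired_clean = torch.rand(4, 1, 64, 64)   # line drawings without a rough source
unpaired_rough = torch.rand(4, 1, 64, 64)   # rough sketches without a clean target

fake = G(rough)
# Generator: reconstruction on pairs + fooling D on paired and unpaired roughs.
g_loss = nn.functional.mse_loss(fake, clean) \
       + bce(D(fake), torch.ones(4, 1)) \
       + bce(D(G(unpaired_rough)), torch.ones(4, 1))
# Discriminator: real line drawings (paired or not) vs. generated ones.
d_loss = bce(D(torch.cat([clean, unpaired_clean])), torch.ones(8, 1)) \
       + bce(D(fake.detach()), torch.zeros(4, 1))
print(g_loss.item(), d_loss.item())
```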


Computer Vision and Pattern Recognition | 2017

Joint Gap Detection and Inpainting of Line Drawings

Kazuma Sasaki; Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa

We propose a novel data-driven approach for automatically detecting and completing gaps in line drawings with a Convolutional Neural Network. Existing inpainting approaches for natural images generally require masks indicating the missing regions as input. Here, we show that line drawings have enough structure for the CNN to learn to detect and complete the gaps without any such input. Thus, our method can find the gaps in line drawings and complete them without user interaction. Furthermore, the completion realistically conserves the thickness and curvature of the line segments. All the necessary heuristics for such realistic line completion are learned naturally from a dataset of line drawings, where various patterns of line completion are generated on the fly as training pairs to improve model generalization. We evaluate our method qualitatively on a diverse set of challenging line drawings and also provide quantitative results with a user study, where it significantly outperforms the state of the art.
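
The on-the-fly training-pair generation might look roughly like the sketch below, where random rectangular gaps stand in for the paper's gap patterns: gaps are erased from a clean line drawing, and the (gapped, clean) pair is used for supervision.

```python
import numpy as np

def make_training_pair(clean, n_gaps=5, max_size=6, seed=0):
    """Erase a few small random regions from a clean line drawing to create
    artificial gaps (a simplification of the paper's gap patterns)."""
    rng = np.random.default_rng(seed)
    gapped = clean.copy()
    h, w = clean.shape
    for _ in range(n_gaps):
        size = rng.integers(2, max_size + 1)
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        gapped[y:y + size, x:x + size] = 1.0   # paint the gap white (background)
    return gapped, clean

clean_drawing = np.ones((64, 64))
clean_drawing[32, :] = 0.0                      # a single black line
with_gaps, target = make_training_pair(clean_drawing)
print((with_gaps != target).sum(), "pixels were erased")
```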


Computer Graphics Forum | 2016

Single image weathering via exemplar propagation

Satoshi Iizuka; Yuki Endo; Yoshihiro Kanamori; Jun Mitani

This paper presents an efficient approach for generating weathering effects with detailed appearance variations in a single image. Previous approaches merely change the chroma or reflectance of weathered objects, which is not sufficient for materials with detailed shading and texture variations, such as growing moss and peeling plaster. Our method propagates such detailed features via seamless patch-based synthesis driven by a weathering degree distribution. Unlike previous methods, the weathering degrees are calculated efficiently using Radial Basis Functions, even for materials with wide color variations. We use graph-cut-based optimization to identify the most weathered region as a “weathering exemplar”, from which we sample weathering patches. We demonstrate that our method enables us to generate various types of detailed weathering effects interactively.
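
A Gaussian-RBF interpolation of weathering degrees from a few user-marked pixels can be sketched as follows; the color-space features, kernel width, and regularization are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def fit_rbf(centers, values, sigma=0.2):
    """Fit Gaussian RBF weights so that degree(center_i) = value_i:
    solve Phi @ w = values with Phi[i, j] = exp(-||c_i - c_j||^2 / sigma^2)."""
    sq = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-sq / sigma ** 2) + 1e-8 * np.eye(len(centers))
    w = np.linalg.solve(phi, values)
    return lambda x: np.exp(
        -((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / sigma ** 2) @ w

# A few user-marked pixels with known weathering degrees (1 = most weathered).
marked_colors = np.array([[0.2, 0.5, 0.1], [0.8, 0.8, 0.7]])   # mossy vs. clean pixel
marked_degrees = np.array([1.0, 0.0])
degree = fit_rbf(marked_colors, marked_degrees)

pixels = np.random.rand(1000, 3)                # all image pixels as RGB features
weathering_map = np.clip(degree(pixels), 0.0, 1.0)
print(weathering_map.shape, weathering_map.min(), weathering_map.max())
```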

Collaboration


Dive into Satoshi Iizuka's collaborations.

Top Co-Authors

Yuki Endo

University of Tsukuba
