Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kazuma Sasaki is active.

Publication


Featured research published by Kazuma Sasaki.


international conference on computer graphics and interactive techniques | 2016

Learning to simplify: fully convolutional networks for rough sketch cleanup

Edgar Simo-Serra; Satoshi Iizuka; Kazuma Sasaki; Hiroshi Ishikawa

In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable to vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any size and aspect ratio as input, and outputs a simplified sketch with the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., the need for vector images as input and long computation times; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.
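
As a rough illustration of the architecture described above, the sketch below shows a minimal fully convolutional encoder-decoder in PyTorch that accepts a grayscale raster sketch of any size and returns a simplified image of the same dimensions. The layer widths and depths are illustrative assumptions, not the published network.

```python
# A minimal sketch (not the authors' released model) of a fully convolutional
# encoder-decoder that maps a rough raster sketch of any size to a simplified
# sketch of the same size. Layer widths and depths are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchSimplifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling path: strided convolutions only, no fully connected
        # layers, so the network is agnostic to input height/width.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 48, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(48, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        # Upsampling path back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 48, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(48, 1, 3, padding=1), nn.Sigmoid(),  # clean-line probability map
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        y = self.decoder(self.encoder(x))
        # Odd input sizes can drift by a pixel after stride-2 down/up sampling;
        # resize back so the output dimensions always match the input.
        return F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)

model = SketchSimplifier()
rough = torch.rand(1, 1, 317, 451)   # grayscale scan, arbitrary size
clean = model(rough)                 # same spatial size as the input
assert clean.shape == rough.shape
```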


international conference on robotics and automation | 2017

Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning

Pin-Chu Yang; Kazuma Sasaki; Kanata Suzuki; Kei Kase; Shigeki Sugano; Tetsuya Ogata

We propose a practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production line worker. The proposed approach provides an intuitive way to collect data and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability. The proposed approach utilizes a real-time user interface with a monitor and provides a first-person perspective using a head-mounted display. Through this interface, teleoperation is used for collecting task operating data, especially for tasks that are difficult to handle with conventional methods. A two-phase deep learning model is also utilized in the proposed approach: a deep convolutional autoencoder extracts image features and reconstructs images, and a fully connected deep time delay neural network learns the dynamics of a robot task process from the extracted image features and motion angle signals. The “Nextage Open” humanoid robot is used as an experimental platform to evaluate the proposed model. The object folding task is evaluated using 35 trained and 5 untrained sensory-motor sequences. Testing the trained model with online generation demonstrates a 77.8% success rate for the object folding task.
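
The two-phase design can be sketched as follows: a convolutional autoencoder that compresses camera images into features, and a fully connected time delay network that predicts the next joint angles from a short window of past features and angles. All dimensions (feature size, degrees of freedom, window length) are assumptions for illustration, not the actual Nextage Open configuration.

```python
# A hedged sketch of the two-phase model: a convolutional autoencoder for
# image features, and a fully connected time delay network for motion.
# FEAT, JOINTS, and WINDOW are assumed values, not the paper's settings.
import torch
import torch.nn as nn

FEAT, JOINTS, WINDOW = 32, 7, 5  # assumed feature size, DoF, delay window

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                 # 64x64 RGB input assumed
            nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, FEAT),
        )
        self.dec = nn.Sequential(
            nn.Linear(FEAT, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, img):
        z = self.enc(img)          # compressed image feature
        return self.dec(z), z      # reconstruction trains the encoder

class TimeDelayPolicy(nn.Module):
    """Maps the last WINDOW steps of (feature, angles) to the next angles."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * (FEAT + JOINTS), 256), nn.Tanh(),
            nn.Linear(256, JOINTS),
        )

    def forward(self, feats, angles):
        x = torch.cat([feats, angles], dim=-1)   # (B, WINDOW, FEAT+JOINTS)
        return self.net(x.flatten(1))            # (B, JOINTS) next command

ae, policy = ConvAutoencoder(), TimeDelayPolicy()
recon, z = ae(torch.rand(1, 3, 64, 64))
next_angles = policy(torch.rand(1, WINDOW, FEAT), torch.rand(1, WINDOW, JOINTS))
```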


intelligent robots and systems | 2015

Neural network based model for visual-motor integration learning of robot's drawing behavior: Association of a drawing motion from a drawn image

Kazuma Sasaki; Hadi Tjandra; Kuniaki Noda; Kuniyuki Takahashi; Tetsuya Ogata

In this study, we propose a neural network based model for learning a robot's drawing sequences in an unsupervised manner. We focus on the ability to learn visual-motor relationships, which can serve as a reusable memory for associating a drawing motion from a picture image. Assuming that a humanoid robot can draw a shape on a pen tablet, the proposed model learns drawing sequences, which comprise drawing motions and frames of the drawn picture. To learn raw pixel data without any given specific features, we utilized a deep neural network for compressing high-dimensional picture images and a continuous time recurrent neural network for integration of motion and picture images. To confirm the ability of the proposed model, we performed an experiment on learning 15 sequences comprising three types of shapes. The model successfully learns all the sequences and can associate a drawing motion from an untrained picture image as successfully as from a trained one. We also show that the proposed model self-organizes its behavior according to the types of shapes.
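
The continuous time recurrent neural network at the core of this model can be sketched as a leaky-integrator cell of the standard CTRNN form. The hidden size, input dimensions, and time constant below are illustrative assumptions.

```python
# A minimal sketch of a continuous-time RNN (CTRNN) step of the kind used to
# integrate compressed image features with drawing motion. Dimensions and the
# time constant tau are assumed values, not the paper's configuration.
import torch
import torch.nn as nn

class CTRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, tau=5.0):
        super().__init__()
        self.tau = tau
        self.w_in = nn.Linear(input_size, hidden_size)
        self.w_rec = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, u):
        # Leaky integration: du/dt = (-u + W_rec*tanh(u) + W_in*x) / tau,
        # discretized with a single Euler step (dt = 1).
        du = (-u + self.w_rec(torch.tanh(u)) + self.w_in(x)) / self.tau
        return u + du

# One integration step per frame over a (image feature, pen motion) pair;
# 32 image dims and 2D pen motion are assumptions.
cell = CTRNNCell(input_size=32 + 2, hidden_size=64)
u = torch.zeros(1, 64)
for t in range(15):
    x = torch.rand(1, 34)          # concatenated feature + motion at step t
    u = cell(x, u)
readout = nn.Linear(64, 2)
next_motion = readout(torch.tanh(u))   # associated next pen motion
```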


computer vision and pattern recognition | 2017

Joint Gap Detection and Inpainting of Line Drawings

Kazuma Sasaki; Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa

We propose a novel data-driven approach for automatically detecting and completing gaps in line drawings with a Convolutional Neural Network. In the case of existing inpainting approaches for natural images, masks indicating the missing regions are generally required as input. Here, we show that line drawings have enough structure for the CNN to learn to detect and complete the gaps without any such input. Thus, our method can find the gaps in line drawings and complete them without user interaction. Furthermore, the completion realistically conserves the thickness and curvature of the line segments. All the necessary heuristics for such realistic line completion are learned naturally from a dataset of line drawings, where various patterns of line completion are generated on the fly as training pairs to improve model generalization. We evaluate our method qualitatively on a diverse set of challenging line drawings and also provide quantitative results with a user study, where it significantly outperforms the state of the art.
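
The on-the-fly generation of training pairs might look like the following sketch: random discs are erased from a clean line drawing to create a gapped input, with the untouched drawing as the target. Gap counts and radii are assumed values for illustration.

```python
# A hedged sketch of on-the-fly training pair generation: random gaps are cut
# into a clean line drawing to form the network input, while the untouched
# drawing serves as the completion target. Parameters are assumptions.
import numpy as np

def make_training_pair(clean, n_gaps=8, max_radius=4, rng=np.random):
    """clean: (H, W) float array, 0 = ink, 1 = paper. Returns (gapped, clean)."""
    gapped = clean.copy()
    h, w = clean.shape
    ys, xs = np.nonzero(clean < 0.5)          # pixels that carry ink
    if len(ys) == 0:
        return gapped, clean
    for _ in range(n_gaps):
        i = rng.randint(len(ys))              # pick a point on a line
        cy, cx = ys[i], xs[i]
        r = rng.randint(1, max_radius + 1)
        yy, xx = np.ogrid[:h, :w]
        gapped[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0  # erase a disc
    return gapped, clean

drawing = np.ones((128, 128), dtype=np.float32)
drawing[64, 16:112] = 0.0                     # a single horizontal stroke
x, y = make_training_pair(drawing)            # x has gaps, y is the target
```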


Robotics and Autonomous Systems | 2016

Visual motor integration of robot's drawing behavior using recurrent neural network

Kazuma Sasaki; Kuniaki Noda; Tetsuya Ogata

Drawing is a way of visually expressing our feelings, knowledge, and situation. People draw pictures to share information with other human beings. This study investigates visuomotor memory (VM), which is a reusable memory storing drawing behavioral data. We propose a neural network-based model for acquiring a computational memory that can replicate VM through self-organized learning of a robot’s actual drawing experiences. To design the model, we assume that VM has the following two characteristics: (1) it is formed by bottom-up learning and integration of temporal drawn pictures and motion data, and (2) it allows the observers to associate drawing motions from pictures. The proposed model comprises a deep neural network for dimensionally compressing temporal drawn images and a continuous-time recurrent neural network for integration learning of drawing motions and temporal drawn images. Two experiments are conducted on unicursal shape learning to investigate whether the proposed model can learn the function without any shape information for visual processing. Based on the first experiment, the model can learn 15 drawing sequences for three types of pictures, acquiring associative memory for drawing motions through the bottom-up learning process. Thus, it can associate drawing motions from untrained drawn images. In the second experiment, four types of pictures are trained, with four distorted variations per type. In this case, the model can organize the different shapes based on their distortions by utilizing both the image information and the drawing motions, even if visual characteristics are not shared.
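
A minimal sketch of the association step, under assumptions rather than the paper's exact procedure: the compressed features of a target picture are held as constant context while a recurrent model (a plain RNN cell standing in for the CTRNN) rolls out the drawing motion step by step.

```python
# A hedged sketch of associating a drawing motion from a drawn image: image
# features are fed as constant context while the recurrent model generates
# the motion sequence. nn.RNNCell here is a stand-in for the CTRNN, and all
# dimensions are assumed values.
import torch
import torch.nn as nn

class VisuomotorRNN(nn.Module):
    def __init__(self, img_dim=20, motion_dim=2, hidden=70):
        super().__init__()
        self.hidden, self.motion_dim = hidden, motion_dim
        self.rnn = nn.RNNCell(img_dim + motion_dim, hidden)
        self.out = nn.Linear(hidden, motion_dim)

    def rollout(self, img_feat, steps=50):
        b = img_feat.size(0)
        h = torch.zeros(b, self.hidden)
        m = torch.zeros(b, self.motion_dim)       # pen starts at rest
        motions = []
        for _ in range(steps):
            h = self.rnn(torch.cat([img_feat, m], dim=-1), h)
            m = self.out(h)                       # next pen displacement
            motions.append(m)
        return torch.stack(motions, dim=1)        # (B, steps, motion_dim)

model = VisuomotorRNN()
target_feat = torch.rand(1, 20)          # features of an untrained drawn image
trajectory = model.rollout(target_feat)  # associated drawing motion sequence
```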


The Visual Computer | 2018

Learning to restore deteriorated line drawing

Kazuma Sasaki; Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa

We propose a fully automatic approach to restore aged line drawings. We decompose the task into two subtasks: the line extraction subtask, which aims to extract line fragments and remove the paper texture background, and the restoration subtask, which fills in possible gaps and deterioration of the lines to produce a clean line drawing. Our approach is based on a convolutional neural network that consists of two sub-networks corresponding to the two subtasks. They are trained as part of a single framework in an end-to-end fashion. We also introduce a new dataset consisting of manually annotated sketches by Leonardo da Vinci which, in combination with a synthetic data generation approach, allows training the network to restore deteriorated line drawings. We evaluate our method on challenging 500-year-old sketches and compare it with existing approaches in a user study, in which our approach is preferred 72.7% of the time.
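
The two-sub-network design can be illustrated as below: an extraction stage that strips paper texture, followed by a restoration stage that fills gaps, composed and supervised jointly so both train end-to-end. The layer configurations and the joint loss shown are assumptions, not the published model.

```python
# A hedged sketch of the two-stage architecture: a line extraction sub-network
# followed by a restoration sub-network, trained end-to-end with a joint loss.
# The layers and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class Restorer(nn.Module):
    def __init__(self):
        super().__init__()
        # Sub-network 1: strip paper texture, keep line fragments.
        self.extract = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
        # Sub-network 2: fill gaps/deterioration in the extracted lines.
        self.restore = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, aged):
        lines = self.extract(aged)
        return self.restore(lines), lines

model = Restorer()
aged = torch.rand(4, 1, 128, 128)      # synthetic deteriorated input
target = torch.rand(4, 1, 128, 128)    # clean line drawing target
clean, lines = model(aged)
# End-to-end training: supervising both stages lets gradients shape each
# sub-network while the whole pipeline is optimized as one framework.
loss = F.mse_loss(clean, target) + F.mse_loss(lines, target)
loss.backward()
```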


international conference on artificial neural networks | 2016

Classification of Photo and Sketch Images Using Convolutional Neural Networks

Kazuma Sasaki; Madoka Yamakawa; Kana Sekiguchi; Tetsuya Ogata

Content-Based Image Retrieval (CBIR) systems enable us to access images using images themselves as queries instead of keywords. Both photorealistic images and hand-drawn sketches can be used as queries. Recently, convolutional neural networks (CNNs) have been used to discriminate images, including sketches. However, these tasks have been limited to classifying only one type of image, either photos or sketches, due to the lack of a large dataset of sketch images and the large difference in their visual characteristics. In this paper, we introduce a simple way to prepare training datasets that enables a CNN model to classify both types of images by color-transforming photo and illustration images. Through training experiments, we show that the proposed method improves classification accuracy.
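
One way such a color transformation could work is sketched below: photos are converted to a sketch-like representation (grayscale plus a simple edge map) so that photo and sketch training images share similar visual statistics. The exact transform in the paper may differ; this is only an assumed example.

```python
# A minimal sketch, under assumptions, of the dataset preparation idea: a
# color transform that gives photos sketch-like statistics before training a
# single CNN on both photos and sketches. Not the paper's exact transform.
import numpy as np

def sketchify(rgb):
    """rgb: (H, W, 3) float array in [0, 1]. Returns a sketch-like grayscale image."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance
    # Gradient magnitude as a cheap edge map (finite differences).
    gy, gx = np.gradient(gray)
    edges = np.sqrt(gx ** 2 + gy ** 2)
    edges = edges / (edges.max() + 1e-8)
    return 1.0 - edges                              # dark lines on white paper

photo = np.random.rand(64, 64, 3)
training_image = sketchify(photo)   # fed to the CNN alongside real sketches
```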


international symposium on neural networks | 2018

End-to-End Visuomotor Learning of Drawing Sequences using Recurrent Neural Networks

Kazuma Sasaki; Tetsuya Ogata


arXiv: Computer Vision and Pattern Recognition | 2018

Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability

Zhihao Li; Toshiyuki Motoyoshi; Kazuma Sasaki; Tetsuya Ogata; Shigeki Sugano


IEEE Transactions on Cognitive and Developmental Systems | 2018

Adaptive Drawing Behavior by Visuomotor Learning Using Recurrent Neural Networks

Kazuma Sasaki; Tetsuya Ogata

Collaboration


Dive into Kazuma Sasaki's collaborations.
