Publication


Featured research published by Masahiro Toyoura.


The Visual Computer | 2012

Automatic generation of accentuated pencil drawing with saliency map and LIC

Michitaka Hata; Masahiro Toyoura; Xiaoyang Mao

An artist usually does not draw all areas of a picture homogeneously, but tries to make the work more expressive by emphasizing what is important while eliminating irrelevant details. Creating expressive painterly images with such an accentuation effect remains a challenge because of the subjectivity of information selection. This paper presents a novel technique for automatically converting an input image into a pencil drawing with such emphasis and elimination effects. The proposed technique utilizes a saliency map, a computational model of visual attention, to predict the focus of attention in the input image. A new level-of-detail control algorithm using a multi-resolution pyramid is also developed for locally adapting the rendering parameters, such as the density, orientation and width of pencil strokes, to the degree of attention defined by the saliency map. Experimental results show that images generated with the proposed method present a visual effect similar to that of real pencil drawings and can successfully direct the viewer's attention toward the focus.
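As a rough illustration of the saliency-driven rendering control described above, the following sketch computes a crude center-surround saliency map and maps it to a per-pixel stroke-density factor. The function names, blur scales and density range are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(gray, fine=2.0, coarse=8.0):
    """Crude center-surround saliency: difference of two Gaussian-blurred copies."""
    sal = np.abs(gaussian_filter(gray, fine) - gaussian_filter(gray, coarse))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)

def stroke_density(saliency, lo=0.2, hi=1.0):
    """Map saliency in [0, 1] to a per-pixel stroke-density factor."""
    return lo + (hi - lo) * saliency

# Toy image: a bright blob on a dark background attracts attention.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = center_surround_saliency(img)
dens = stroke_density(sal)
# Strokes would be rendered densest where saliency is high (around the blob).
```

In the same spirit, orientation and width of strokes could be modulated by the same map; only the density factor is shown here.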


The Visual Computer | 2015

Hidden message in a deformation-based texture

Jiayi Xu; Xiaoyang Mao; Xiaogang Jin; Aubrey Jaffer; Shufang Lu; Li Li; Masahiro Toyoura

We present stego-texture, a unique texture synthesis method that allows users to deliver personalized messages with beautiful, decorative textures. Our approach was inspired by the success of recent work generating marbling textures using mathematical functions. We were able to transform an input image or a text message into an intricate texture by combining the seven basic, reversible functions provided in the system. Later, the input image or message could be recovered by reversing the process of these functions. During the design process, the parameters of operations were automatically recorded, encrypted and invisibly embedded into the final pattern to create a stego-texture. In this way, the receiver could extract the hidden message from the stego-texture without the need for extra information from the sender. To ensure that the delivered message is unnoticeably covered by the texture, we propose a new technique for automatically creating a background that is harmonious with the message based on a set of visual perception cues.
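The reversibility at the heart of this approach can be illustrated with a marbling-style tine-line deformation: the displacement is parallel to the tine line, so the distance of a point to the line is unchanged, and applying the same function with negated magnitude undoes it exactly. This is a minimal sketch in the spirit of the paper; the function and its parameters are assumptions, not the system's actual set of seven operators.

```python
import numpy as np

def tine_line(pts, a, m, alpha=1.0, lam=2.0):
    """Marbling-style tine-line deformation (invertible): points shift parallel
    to direction m, by an amount decaying with distance from the line
    through point a with direction m."""
    m = np.asarray(m, float)
    m = m / np.linalg.norm(m)
    n = np.array([-m[1], m[0]])            # unit normal to the line
    d = np.abs((pts - a) @ n)              # distance of each point to the line
    return pts + (alpha * lam / (lam + d))[:, None] * m

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(50, 2))
fwd = tine_line(pts, a=np.array([5.0, 5.0]), m=(1.0, 0.3), alpha=1.5)
back = tine_line(fwd, a=np.array([5.0, 5.0]), m=(1.0, 0.3), alpha=-1.5)
# The inverse pass restores the original points (up to float error).
```

A message encoder would record the parameters (a, m, alpha, lam) of each applied deformation so the receiver can run the inverses in reverse order.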


Cyberworlds | 2013

Detecting Markers in Blurred and Defocused Images

Masahiro Toyoura; Haruhito Aruga; Matthew Turk; Xiaoyang Mao

Planar markers enable an augmented reality (AR) system to estimate the pose of objects from images containing them. However, conventional markers are difficult to detect in blurred or defocused images. We propose a new marker, together with a detection and identification method, designed to work under such conditions. The problem with conventional markers is that their patterns consist of high-frequency components, such as sharp edges, which are attenuated in blurred or defocused images. Our marker instead consists of a single low-frequency component; we call it a mono-spectrum marker. The mono-spectrum marker can be detected in real time with a GPU. In experiments, we confirm that the mono-spectrum marker can be accurately detected in blurred and defocused images in real time. Using these markers can increase the performance and robustness of AR systems and other vision applications that require detection or tracking of defined markers.
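A minimal sketch of why a single low-frequency component survives defocus: Gaussian blur attenuates high frequencies far more than low ones, so the marker's one spectral peak remains detectable by a simple FFT peak search. The pattern, blur model and `dominant_frequency` helper below are illustrative assumptions, not the paper's detection pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_frequency(img):
    """Index (fy, fx) of the strongest non-DC component of a 2D pattern."""
    spec = np.abs(np.fft.fft2(img - img.mean()))
    return np.unravel_index(np.argmax(spec), spec.shape)

# A "mono-spectrum"-style pattern: one low-frequency sinusoid across the tile.
n = 64
y, x = np.mgrid[0:n, 0:n]
marker = np.cos(2 * np.pi * 2 * x / n)        # 2 cycles across the tile
blurred = gaussian_filter(marker, sigma=4.0)  # simulate heavy defocus
# The single low-frequency peak survives the blur at the same position.
```

A sharp-edged pattern, by contrast, spreads its energy over many high frequencies that the same blur would wipe out.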


Conference on Multimedia Modeling | 2013

Film Comic Generation with Eye Tracking

Tomoya Sawada; Masahiro Toyoura; Xiaoyang Mao

Automatic generation of film comics requires solving several challenging problems, such as selecting important frames that convey the whole story, trimming the frames to fit the shape of the panels without corrupting the composition of the original image, and arranging visually pleasing speech balloons without hiding important objects in the panel. We propose a novel approach to the automatic generation of film comics. The key idea is to aggregate eye-tracking data and image features into a computational map, called the iMap, for quantitatively measuring the importance of frames in terms of story content and user attention. The transition of the iMap over time provides the solution to frame selection. Word balloon arrangement and image trimming are realized by optimizing energy functions derived from the iMap.
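The iMap idea of fusing gaze density with image features and then selecting frames by importance might be sketched as follows. The weighting, the total-energy measure and the toy data are assumptions for illustration only.

```python
import numpy as np

def imap(gaze_heatmap, feature_map, alpha=0.6):
    """Importance map combining eye-tracking density and image features."""
    return alpha * gaze_heatmap + (1 - alpha) * feature_map

def select_frames(imaps, k):
    """Pick the k frames whose importance maps carry the most total energy."""
    energy = [m.sum() for m in imaps]
    return sorted(np.argsort(energy)[-k:].tolist())

rng = np.random.default_rng(1)
frames = []
for t in range(10):
    # Heavy fixation on frames 3 and 7 simulates strong viewer attention.
    gaze = rng.random((8, 8)) * (2.0 if t in (3, 7) else 0.5)
    feat = rng.random((8, 8)) * 0.5
    frames.append(imap(gaze, feat))
picked = select_frames(frames, k=2)
```

Trimming and balloon placement would then minimize energy functions over the same maps, e.g. penalizing crops or balloons that cover high-importance pixels.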


Digital Identity Management | 2007

Silhouette Extraction with Random Pattern Backgrounds for the Volume Intersection Method

Masahiro Toyoura; Masaaki Iiyama; Koh Kakusho; Michihiko Minoh

In this paper, we present a novel approach to extracting silhouettes by using a particular pattern that we call the random pattern. The volume intersection method reconstructs the shapes of 3D objects from their silhouettes obtained with multiple cameras. With this method, if some parts of the silhouettes are missed, the corresponding parts of the reconstructed shapes are also missed. When the colors of the objects and the backgrounds are similar, many parts of the silhouettes are missed. We adopt random pattern backgrounds to extract correct silhouettes. The random pattern consists of many small regions with randomly selected colors. By using random pattern backgrounds, we can keep the rate of missing parts below a specified percentage, even for objects of unknown color. To refine the silhouettes, we detect and fill in the missing parts by integrating multiple images. From the images captured by the multiple cameras observing the object, the object's colors can be estimated. The missing parts can then be detected by comparing the object's color with the corresponding background color. In our experiments, we confirmed that this method effectively extracts silhouettes and reconstructs 3D shapes.
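The probabilistic argument behind random pattern backgrounds can be sketched as follows: because each small cell has a randomly selected color, the chance that a cell happens to match an unknown object color is small, so simple color-distance thresholding misses only a bounded fraction of the silhouette. The cell size, colors and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Background: small cells of randomly selected colors.
cell, h, w = 4, 64, 64
cells = rng.integers(0, 256, size=(h // cell, w // cell, 3))
background = np.kron(cells, np.ones((cell, cell, 1))).astype(np.uint8)

# Composite a uniformly colored object onto the background.
scene = background.copy()
obj = np.zeros((h, w), bool)
obj[20:44, 20:44] = True
scene[obj] = (90, 140, 200)  # object color unknown to the extractor

# A pixel belongs to the silhouette where the scene disagrees with the
# known background; only cells that happen to match the object are missed.
dist = np.linalg.norm(scene.astype(int) - background.astype(int), axis=2)
silhouette = dist > 30
```

With multiple cameras, the few cells that do match could be detected by estimating the object color across views and filled in, as the abstract describes.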


The Visual Computer | 2016

Retrieval of clothing images based on relevance feedback with focus on collar designs

Honglin Li; Masahiro Toyoura; Kazumi Shimizu; Wei Yang; Xiaoyang Mao

Content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing image features representing the details of collar designs, which are important for people choosing clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for collar-design feature extraction. A prototype clothing image retrieval system based on a relevance feedback approach and the optimum-path forest algorithm is also developed to improve the query results and allow users to find clothing images of more preferred designs. A series of experiments is conducted to test the quality of the feature extraction methods and to validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the quality of the feature extraction methods. The average scores of all RF steps, the average number of RF iterations taken before achieving desired results, and the score transitions over RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.
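A relevance-feedback query loop of the kind described might look like the sketch below, which substitutes a simple Rocchio-style query update for the paper's optimum-path forest classifier; the feature vectors and weights are toy assumptions.

```python
import numpy as np

def rank(query, feats):
    """Rank database images by cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return np.argsort(f @ q)[::-1]

def rocchio(query, feats, relevant, irrelevant, a=1.0, b=0.75, c=0.25):
    """One relevance-feedback step: move the query toward images the user
    marked relevant and away from those marked irrelevant."""
    q = a * query
    if relevant:
        q = q + b * feats[relevant].mean(axis=0)
    if irrelevant:
        q = q - c * feats[irrelevant].mean(axis=0)
    return q

# Toy collar-feature vectors: images 0-2 share the preferred design.
feats = np.array([[1.0, 0.1], [0.9, 0.2], [1.0, 0.0],
                  [0.1, 1.0], [0.0, 0.9]])
query = np.array([0.5, 0.5])
query = rocchio(query, feats, relevant=[0, 1], irrelevant=[3])
order = rank(query, feats)
```

Iterating this loop until the user is satisfied corresponds to the RF iterations whose counts and score transitions the paper evaluates.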


Cyberworlds | 2014

Example-Based Automatic Caricature Generation

Wei Yang; Kouki Tajima; Jiayi Xu; Masahiro Toyoura; Xiaoyang Mao

Caricature is a popular artistic medium widely used for effective communication. The fascination of caricature lies in its expressive depiction of a person's prominent features, which is usually realized through the so-called exaggeration technique. This paper proposes a new example-based automatic caricature generation system supporting the exaggeration of visual appearance features. The system comprises the construction of a learning database and the generation of caricatures. The construction of the learning database links pairs of facial images and corresponding caricatures. Given an input face, the system automatically computes the feature vectors of facial parts and hairstyle, and searches the learning database for the exaggerated parts corresponding to the most prominent features. Experimental results show that our system can achieve control over the degree of exaggeration and that the exaggerated results better represent the features of the subjects.
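The exaggeration step, amplifying the most prominent deviation of a face from an average face, can be sketched as follows; the feature vectors and gain are illustrative assumptions, not the system's learned values.

```python
import numpy as np

def exaggerate(feat, mean_feat, k=1.8):
    """Amplify only the most prominent feature: the component that deviates
    most from the average face is scaled by the exaggeration gain k."""
    dev = feat - mean_feat
    i = np.argmax(np.abs(dev))
    out = feat.copy()
    out[i] = mean_feat[i] + k * dev[i]
    return out

mean_face = np.array([1.00, 0.50, 0.30])  # e.g. eye spacing, nose length, chin width
face = np.array([1.02, 0.80, 0.31])       # unusually long nose stands out
cari = exaggerate(face, mean_face)
```

In the full system, the exaggerated feature vector would then be used to retrieve matching caricature parts from the learning database; varying k gives control over the degree of exaggeration.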


The Visual Computer | 2014

Mono-spectrum marker: an AR marker robust to image blur and defocus

Masahiro Toyoura; Haruhito Aruga; Matthew Turk; Xiaoyang Mao

Planar markers enable an augmented reality (AR) system to estimate the pose of objects from images containing them. However, conventional markers are difficult to detect in blurred or defocused images. We propose a new marker, together with a detection and identification method, designed to work under such conditions. The problem with conventional markers is that their patterns consist of high-frequency components, such as sharp edges, which are attenuated in blurred or defocused images. Our marker instead consists of a single low-frequency component; we call it a mono-spectrum marker. The mono-spectrum marker can be detected in real time with a GPU. In experiments, we confirm that the mono-spectrum marker can be accurately detected in blurred and defocused images in real time. Using these markers can increase the performance and robustness of AR systems and other vision applications that require detection or tracking of defined markers.


Virtual Reality Continuum and Its Applications in Industry | 2012

Mono-glass for providing distance information for people losing sight in one eye

Masahiro Toyoura; Kenji Kashiwagi; Atsushi Sugiura; Xiaoyang Mao

We propose mono-glass, a device for providing distance information to people who have lost sight in one eye. We implemented a pilot system of mono-glass with careful consideration of precision, real-time processing and intuitive presentation. The loss of sight in one eye disables binocular disparity processing and makes short-range activities difficult. The proposed mono-glass is a wearable device with two cameras and one display. The two cameras capture images on behalf of the user's eyes. Depth information is then reconstructed from the captured images and visualized through defocusing for the healthy eye. Experimental results showed that our system can represent depth information in single-channel images.
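The core rendering idea, encoding reconstructed depth as defocus so a single eye can read distance from sharpness, might be sketched like this; the binned Gaussian blur and its parameters are assumptions, not the pilot system's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_by_depth(image, depth, max_sigma=4.0, n_bins=4):
    """Render distance as blur: the farther a pixel, the more it is
    defocused, so the healthy eye reads depth from local sharpness."""
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-9)
    bins = np.minimum((d * n_bins).astype(int), n_bins - 1)
    out = np.empty_like(image, dtype=float)
    for b in range(n_bins):
        sigma = max_sigma * (b + 0.5) / n_bins
        out[bins == b] = gaussian_filter(image.astype(float), sigma)[bins == b]
    return out

img = np.zeros((32, 32))
img[::4] = 1.0                                       # striped test pattern
depth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # depth grows left to right
shown = defocus_by_depth(img, depth)
# Near (left) regions stay sharp; far (right) regions are smoothed.
```

In the real device, the depth map would come from stereo matching between the two head-mounted cameras rather than being given.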


Conference on Multimedia Modeling | 2012

Film comic reflecting camera-works

Masahiro Toyoura; Mamoru Kunihiro; Xiaoyang Mao

We propose a novel technique for automatically creating film comics that reflect the camera-works of the original movie. Camera-works are among the most important effects contributing to the mise en scène of a movie. A skilled director can use camera-works dexterously to draw the attention of audiences, represent sentiments, and give a change of pace to the movie. When creating film comics, camera-works are detected in the original movie and mapped to panels and layouts in particular comic styles, following what is called the grammar of manga. We present a new algorithm for automatically tiling the stylized panels into comic pages based on the grammar of manga. The results of our subject study show that reflecting camera-works in film comics enables stories to be presented in a more readable, vivid and immersive way.
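The mapping from detected camera-works to panel styles, and the tiling of panels into pages, might be sketched as below; the style table and greedy row packing are illustrative stand-ins for the paper's grammar-of-manga rules.

```python
# Panel style per camera-work (widths in grid units; values are illustrative).
PANEL_STYLE = {
    "cut":     {"width": 1},  # plain shot: small panel
    "pan":     {"width": 3},  # pan: wide panoramic panel
    "zoom_in": {"width": 2},  # zoom-in: enlarged panel for emphasis
}

def layout(camera_works, page_width=4):
    """Greedily pack panels into rows no wider than page_width."""
    rows, row, used = [], [], 0
    for cw in camera_works:
        w = PANEL_STYLE[cw]["width"]
        if used + w > page_width:
            rows.append(row)
            row, used = [], 0
        row.append((cw, w))
        used += w
    if row:
        rows.append(row)
    return rows

page = layout(["cut", "pan", "zoom_in", "cut", "cut"])
```

A real layout engine would also consider panel heights and reading order, but the same idea of camera-work-dependent panel shapes applies.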

Collaboration


Dive into Masahiro Toyoura's collaboration. Top co-authors:

- Xiaoyang Mao (University of Yamanashi)
- Koh Kakusho (Kwansei Gakuin University)
- Wei Yang (University of Yamanashi)
- Jiayi Xu (Hangzhou Dianzi University)
- Honglin Li (University of Yamanashi)
- Noriyuki Abe (University of Yamanashi)