Publication


Featured research published by Yoshihiro Sugaya.


Dentomaxillofacial Radiology | 2014

Tooth shape reconstruction from dental CT images with the region-growing method.

Ryuichi Yanagisawa; Yoshihiro Sugaya; Shin Kasahara; Shinichiro Omachi

OBJECTIVES: The three-dimensional shape of teeth provides useful information. However, obtaining accurate three-dimensional tooth shapes is difficult without physically extracting the teeth. In this study, we aimed to develop a method for automatically extracting accurate three-dimensional shapes of teeth from dental CT images. METHODS: The proposed method consists of pre-processing and region extraction. Pre-processing is a combination of image-processing techniques that enhance tooth regions. In the region-extraction step, the region-growing method is introduced to extract the region of each tooth. Constraint conditions derived from the structural characteristics of teeth are introduced for accurate extraction. Finally, morphological image processing is applied to eliminate discontinuous points. RESULTS: We carried out an experiment in which the three-dimensional shapes of teeth were reconstructed from dental CT images. Quantitative evaluation was performed by measuring the three-dimensional spatial accordance rates between the regions obtained by the proposed method and manually extracted regions. The proposed method was significantly more accurate than an existing method at the 5% level. CONCLUSIONS: The experimental results showed that the proposed method reconstructs tooth shapes with high precision. However, an unextracted region remained at the surface of the enamel. Solving this problem and improving the extraction accuracy are important topics for future work.
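The core region-growing step can be sketched in a few lines: starting from a seed, neighbours are absorbed while their intensity stays close to the running region mean. This is a generic 2-D toy version with an assumed 4-neighbour rule and tolerance, not the paper's 3-D method with its tooth-specific constraint conditions:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-neighbours whose
    intensity stays within `tol` of the running region mean
    (2-D toy version of region growing on CT data)."""
    h, w = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        mean = total / len(region)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy "CT slice": a bright 2x2 blob on a dark background.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
sorted(region_grow(img, (1, 1), 2))  # → [(1, 1), (1, 2), (2, 1), (2, 2)]
```

The paper's tooth-specific constraints would add further conditions to the test that admits a neighbour.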


advanced information networking and applications | 2008

Long-Term CPU Load Prediction System for Scheduling of Distributed Processes and its Implementation

Yoshihiro Sugaya; Hiroshi Tatsumi; Mitiharu Kobayashi; Hirotomo Aso

Distributed processing environments are often composed of many heterogeneous computers, and distributed parallel processes must be scheduled on them in an appropriate manner. For such scheduling, predicting the execution load of a process is effective for exploiting the environment's resources. We propose long-term load prediction methods that refer to properties of processes and to runtime predictions. Since the most appropriate prediction method differs according to the situation, we also propose a prediction-module selector that uses a neural network to choose a prediction method according to the state of the changing CPU load. We also discuss the implementation of a long-term CPU load prediction system, which provides load prediction information to schedulers, system administrators, and users.
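The prediction-module selection can be sketched with two simple predictors and a selector. The paper selects among modules with a neural network, so the recent-error rule below is only a hypothetical stand-in:

```python
def predict_last(history):
    """Naive predictor: the next load equals the last observed load."""
    return history[-1]

def predict_mean(history, k=3):
    """Smoothing predictor: the mean of the last `k` observations."""
    window = history[-k:]
    return sum(window) / len(window)

def select_and_predict(history):
    """Pick whichever predictor had the smaller error on the most
    recent observation, then predict the next load with it.
    (Hypothetical stand-in for the paper's neural-network selector.)"""
    if len(history) < 2:
        return predict_last(history)
    past, actual = history[:-1], history[-1]
    err_last = abs(predict_last(past) - actual)
    err_mean = abs(predict_mean(past) - actual)
    best = predict_last if err_last <= err_mean else predict_mean
    return best(history)
```

On a steadily rising load trace the last-value predictor wins; on an oscillating trace the smoothing predictor wins, which is exactly the situation-dependence the selector exploits.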


international conference on consumer electronics | 2015

Ultra-low resolution character recognition system with pruning mutual subspace method

Shuhei Toba; Hirotaka Kudo; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

Improvements in character recognition technology have brought various recognition applications to mobile cameras. However, many low-resolution, poor-quality character images exist due to camera performance or environmental influences, and existing methods are not good at recognizing such low-resolution characters. Therefore, we develop a character recognition system for ultra-low-resolution character images smaller than 20×20 pixels. The proposed system consists of three phases: increasing the training data with a generative learning method, creating a deblurred high-resolution image with a Wiener filter and image alignment, and recognition with a pruning mutual subspace method.
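The generative-learning phase enlarges the training set by degrading clean character templates into low-resolution samples. A minimal sketch, assuming plain block-averaging as the degradation model:

```python
def downsample(image, factor):
    """Average `factor` x `factor` blocks to synthesize a low-resolution
    sample from a high-resolution character image. Assumes both image
    dimensions are divisible by `factor`."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [image[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

Each clean template yields many degraded variants (different factors, offsets, added blur), which is what lets the recognizer cope with sub-20×20 inputs.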


Journal of Information Processing | 2016

Traffic Light Detection Considering Color Saturation Using In-Vehicle Stereo Camera

Hiroki Moizumi; Yoshihiro Sugaya; Masako Omachi; Shinichiro Omachi

One of the major causes of traffic accidents, according to the statistical report on traffic accidents in Japan, is drivers disregarding traffic lights. It would be useful if driving support systems could detect and recognize traffic lights and give appropriate information to drivers. Although many studies on intelligent transportation systems have been conducted, detecting traffic lights in images remains a difficult problem. This is because traffic lights are very small compared to other objects, and many objects in the road environment resemble traffic lights. In addition, the pixel colors of traffic lights are easily over-saturated, which makes traffic light detection using color information difficult. The rapid deployment of new LED traffic lights has introduced a further problem: since LED lights blink at high frequency, when they are captured by a digital video camera there are frames in which all the traffic lights appear to be turned off. It is impossible to detect traffic lights in these frames by searching for the ordinary colors of traffic lights. In this paper, we focus on the stable detection of traffic lights even when they are blinking or their colors are over-saturated. To handle over-saturated pixels, we propose a method for detecting candidate traffic lights that uses intensity information together with color information. To efficiently exclude candidates that are not traffic lights, the sizes of the detected candidates are calculated using a stereo image. In addition, we introduce tracking with a Kalman filter to avoid incorrect detections and achieve stable detection of blinking lights. Experimental results on video sequences taken by an in-vehicle stereo camera verify the efficacy of the proposed approaches.
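The tracking step can be illustrated with a minimal 1-D constant-velocity Kalman filter. `None` stands for frames in which a blinking LED appears off: the filter predicts through them instead of losing the track. The actual system tracks 2-D light candidates; the noise parameters below are illustrative assumptions:

```python
def kalman_track(measurements, q=1e-3, r=0.1):
    """1-D constant-velocity Kalman filter over a list of positions.
    `None` entries are missing measurements (light off): predict only.
    Assumes the first frame has a measurement."""
    x, v = measurements[0], 0.0          # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements:
        # Predict with the constant-velocity motion model.
        x, v = x + v, v
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        if z is not None:                # Update only when the light is visible.
            S = P[0][0] + r
            k0, k1 = P[0][0] / S, P[1][0] / S
            y = z - x                    # innovation
            x, v = x + k0 * y, v + k1 * y
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out
```

For a target moving one unit per frame with the fourth frame "off", the filter's prediction bridges the gap and re-locks on the next visible frame.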


international conference on consumer electronics | 2015

Estimation of gazing points in environment using eye tracker and omnidirectional camera

Shun Chiba; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

In this work, we propose a method for estimating the user's gazing point in the environment using images taken by an eye tracker and an omnidirectional camera. The proposed method estimates the gaze position in the environment by mapping the gazing point obtained by the eye tracker into the omnidirectional camera image. However, matching the omnidirectional image and the eye tracker image is difficult because the omnidirectional image is distorted by the equirectangular projection. Therefore, we propose a method for estimating the gaze location in the omnidirectional image by matching the eye tracker image to the omnidirectional image while considering this distortion. Specifically, the method alternates image matching and image conversion based on the matching results.
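The distortion the matching has to cope with comes from the equirectangular projection, which maps a viewing direction to panorama pixel coordinates as follows (a standard projection formula, not code from the paper):

```python
import math

def direction_to_equirect(yaw, pitch, width, height):
    """Map a viewing direction (yaw, pitch in radians; yaw 0 = panorama
    centre, pitch 0 = horizon) to pixel coordinates in an
    equirectangular panorama of size width x height."""
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v
```

Equal angular steps map to equal pixel steps, so regions near the poles are heavily stretched, which is why straight template matching against the eye tracker image fails without distortion-aware conversion.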


Journal of Sensor and Actuator Networks | 2018

Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems

Shun Chiba; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

The development of information technology has added many conveniences to our lives. On the other hand, however, we have to deal with various kinds of information, which can be a difficult task for elderly people or those who are not familiar with information devices. A technology that recognizes each person's activity and provides appropriate support based on that activity could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of the text is related to the activity of the user. It is necessary to keep in mind that the meaning of the text depends on its location. To tackle this problem, we propose the simultaneous use of a wearable device and a fixed camera. To obtain the global location of the text, we perform image matching using the local features of the images obtained by these two devices. Then, we generate a feature vector based on this information and the content of the text. To show the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment.
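One way to picture the combined feature vector is a bag-of-words encoding of the gazed text concatenated with a one-hot encoding of its global location. The vocabulary, location set, and encoding here are hypothetical illustrations, not the paper's actual features:

```python
def activity_feature(gazed_words, location, vocab, locations):
    """Hypothetical feature vector: word counts of the gazed text
    followed by a one-hot encoding of the text's global location,
    so the same words at different places yield different features."""
    bow = [gazed_words.count(w) for w in vocab]
    loc = [1 if location == name else 0 for name in locations]
    return bow + loc
```

The location part is what disambiguates identical text, e.g. the word "menu" gazed at in a kitchen versus at a desk.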


international conference on indoor positioning and indoor navigation | 2017

Analysis of floor map image in information board for indoor navigation

Tomoya Honto; Yoshihiro Sugaya; Tomo Miyazaki; Shinichiro Omachi

Various indoor navigation methods have been developed recently, but digitized indoor map data are not always available. Therefore, an indoor navigation framework using an image of an information board has been proposed. In this framework, map regions must be extracted from the information board image by hand beforehand, and estimating passageway regions is important because that information is used in map matching. However, the existing passageway discrimination method is highly heuristic and intended for a specific type of floor map. Therefore, in this paper, we propose a semi-automatic method to extract map regions from the information board image with simple user operations. We use the GrabCut and Snakes methods for extraction. For GrabCut, we detect closed regions to prevent the loss of accuracy that occurs when GrabCut is applied to the downsized image. The proposed method can extract a map region with few deficits in a short calculation time. In addition, we propose a machine-learning-based method to distinguish passageway regions from other regions in a segmented image. Experiments confirmed that the proposed methods are effective and promising.
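The closed-region detection can be sketched as a flood fill from the image border: open cells that cannot be reached from outside are enclosed. A toy grid version, where `'#'` walls and 4-connectivity are assumptions of this sketch rather than details from the paper:

```python
from collections import deque

def closed_regions(grid):
    """Return open cells ('.') unreachable from the border, i.e.
    enclosed by walls ('#'). Toy version of detecting closed regions
    in a floor map image before segmentation."""
    h, w = len(grid), len(grid[0])
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if grid[r][c] == '.' and (r in (0, h - 1) or c in (0, w - 1)))
    outside = set(queue)
    while queue:                         # BFS flood fill from the border
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == '.' \
                    and (nr, nc) not in outside:
                outside.add((nr, nc))
                queue.append((nr, nc))
    return {(r, c) for r in range(h) for c in range(w)
            if grid[r][c] == '.' and (r, c) not in outside}
```

Marking such enclosed areas before downsizing keeps thin boundaries from being averaged away when GrabCut runs on the smaller image.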


Proceedings of the 4th International Workshop on Historical Document Imaging and Processing | 2017

Text Retrieval for Japanese Historical Documents by Image Generation

Chisato Sugawara; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

Digitization of historical documents is growing rapidly. Because of the large amount of data, text retrieval is a vital technology for facilitating the use of historical document images. In this paper, we propose a method for retrieving keywords in Japanese historical documents with a text query. The proposed method automatically generates an image of the query text and retrieves regions in documents similar to the generated image by feature matching. We exploit deep learning to generate an image close to the text in Japanese historical document images. Furthermore, we use a convolutional neural network to extract features robust to the appearance variations of text in documents, such as shading and shape. We conducted text retrieval experiments on a public dataset of Japanese historical documents from the Edo era. The experimental results show the effectiveness of the proposed method.
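The matching step, reduced to its simplest form, slides the generated query image over the document image and keeps the best-scoring offset. Plain sum-of-squared-differences on raw pixels stands in here for the paper's CNN feature matching:

```python
def best_match(page, query):
    """Slide `query` over `page` and return the (row, col) offset with
    the smallest sum of squared differences. Raw-pixel stand-in for
    matching in a learned CNN feature space."""
    ph, pw = len(page), len(page[0])
    qh, qw = len(query), len(query[0])
    best, best_pos = None, None
    for r in range(ph - qh + 1):
        for c in range(pw - qw + 1):
            d = sum((page[r + i][c + j] - query[i][j]) ** 2
                    for i in range(qh) for j in range(qw))
            if best is None or d < best:
                best, best_pos = d, (r, c)
    return best_pos
```

Replacing raw pixels with CNN features is what makes the search robust to the shading and shape variation of handwritten historical text.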


IEEE Transactions on Emerging Topics in Computing | 2017

Object-Based Video Coding by Visual Saliency and Temporal Correlation

Kazuya Ogasawara; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

When a disaster occurs, video communication is an effective way to disseminate large quantities of important information. However, video coding standards such as High Efficiency Video Coding (HEVC) compress entire videos regardless of their content; at low bit rates, the quality of significant objects deteriorates. In this paper, an object-based video coding method is proposed to address this problem. The proposed method extracts objects on the basis of visual saliency and the temporal correlation between frames. Subsequently, we execute pre-processing that degrades the background quality before encoding the video with HEVC. This method can reduce the bit rate while preserving target object quality. Experimental comparison with HEVC demonstrates the superior performance of the proposed method.
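The pre-processing idea is to spend fewer bits on non-salient pixels before handing the frame to the standard HEVC encoder. A minimal sketch, with coarse quantization standing in for whatever filtering the actual method applies outside the saliency mask:

```python
def degrade_background(frame, saliency, strength=4):
    """Coarsely quantize pixels outside the salient mask so the
    background compresses cheaply, leaving salient objects untouched.
    `frame` and `saliency` are same-sized 2-D lists; mask values are
    truthy for salient pixels. Quantization is an illustrative
    assumption, not the paper's exact filter."""
    out = []
    for row_f, row_s in zip(frame, saliency):
        out.append([p if s else (p // strength) * strength
                    for p, s in zip(row_f, row_s)])
    return out
```

Because the degraded background has fewer distinct values, the subsequent HEVC pass spends its bit budget on the preserved salient objects.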


Journal of Information Processing | 2016

Efficient Coding for Video Including Text Using Image Generation

Yosuke Nozue; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi

Text in video compressed by lossy compression at a low bitrate easily deteriorates, resulting in blurred text and lower readability. In this paper, we propose a novel image coding method that preserves the readability of text in video at a very low bitrate. During the encoding process, we estimate the parameters of each character of the text; then an image without text is generated and compressed. During the decoding process, we reconstruct the video sequences with text from the text-free images and character images generated from the estimated parameters. The experimental results show the effectiveness of the proposed method in terms of readability at a very low bitrate.
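The decoder side can be pictured as pasting regenerated character images back onto the text-free frame. Here a prebuilt glyph and a position stand in for the character images and parameters estimated during encoding:

```python
def overlay_text(background, glyph, pos):
    """Paste a regenerated character image `glyph` onto the text-free
    `background` frame at (row, col) `pos`, without mutating the
    input. Stand-in for reconstruction from estimated parameters."""
    out = [row[:] for row in background]
    r0, c0 = pos
    for i, row in enumerate(glyph):
        for j, p in enumerate(row):
            out[r0 + i][c0 + j] = p
    return out
```

Because the glyphs are regenerated from parameters rather than compressed as pixels, the pasted text stays sharp no matter how aggressively the background was encoded.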

Collaboration


Dive into Yoshihiro Sugaya's collaboration.

Top Co-Authors

Manabu Gouko

Tohoku Gakuin University
