
Publication


Featured research published by Yoshinori Takeuchi.


PLOS ONE | 2014

Measuring Streetscape Complexity Based on the Statistics of Local Contrast and Spatial Frequency

André Borges Cavalcante; Ahmed Mansouri; Lemya Kacha; Allan Kardec Barros; Yoshinori Takeuchi; Naoji Matsumoto; Noboru Ohnishi

Streetscapes are basic urban elements that play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinions of the participants and is more robust across daytime and nighttime scenes.
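
As a rough illustration of the kind of measure described above, the sketch below computes per-patch local contrast and spatial-frequency statistics for a grayscale image and combines them into a single score. The patch size, the choice of summary statistics, and the equal weighting are assumptions of this sketch, not the parameters used in the paper.

```python
import numpy as np

def complexity_score(gray, patch=16):
    """Illustrative complexity measure from local contrast and spatial frequency.

    gray: 2-D numpy array (grayscale image, values in [0, 1]).
    The patch size, the choice of statistics, and the equal weighting are
    assumptions made for this sketch, not the published method.
    """
    h, w = gray.shape
    contrasts, freqs = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = gray[y:y + patch, x:x + patch]
            # Local (RMS) contrast: standard deviation of the patch.
            contrasts.append(p.std())
            # Local spatial frequency: mean frequency magnitude weighted
            # by the patch's Fourier amplitude spectrum.
            spec = np.abs(np.fft.fft2(p - p.mean()))
            fy, fx = np.meshgrid(np.fft.fftfreq(patch), np.fft.fftfreq(patch),
                                 indexing="ij")
            radial = np.sqrt(fx ** 2 + fy ** 2)
            freqs.append((radial * spec).sum() / (spec.sum() + 1e-9))
    contrasts, freqs = np.asarray(contrasts), np.asarray(freqs)
    # Combine a few summary statistics into one number (equal weights assumed).
    stats = [contrasts.mean(), contrasts.std(), freqs.mean(), freqs.std()]
    return float(np.mean(stats))
```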


Machine Vision and Applications | 2013

Informative patches sampling for image classification by utilizing bottom-up and top-down information

Shuang Bai; Tetsuya Matsumoto; Yoshinori Takeuchi; Hiroaki Kudo; Noboru Ohnishi

In image classification based on the bag-of-visual-words framework, the image patches used for creating image representations significantly affect classification performance. However, patches are currently sampled mainly by processing low-level image information, or are simply extracted regularly or randomly. These approaches are not effective, because the patches they extract are not necessarily discriminative for image categorization. In this paper, we propose to utilize both bottom-up information, obtained by processing low-level image information, and top-down information, obtained by exploring statistical properties of training image grids, to extract image patches. In the proposed method, an input image is divided into regular grids, each of which is evaluated based on its bottom-up and/or top-down information. Subsequently, every grid is assigned a saliency value based on its evaluation result, so that a saliency map can be created for the image. Finally, patch sampling from the input image is performed on the basis of the obtained saliency map. Furthermore, we propose a method to fuse these two kinds of information. The proposed methods are evaluated on both object categories and scene categories, and experimental results demonstrate their effectiveness.
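
The sketch below illustrates the general idea of grid-level saliency followed by patch sampling. The bottom-up cue (gradient energy), the form of the top-down score, and the linear fusion weight are placeholders chosen for this example; they are not the measures defined in the paper.

```python
import numpy as np

def sample_patches(gray, class_freq, grid=32, n_patches=100, alpha=0.5, rng=None):
    """Sample patch locations according to a grid-level saliency map.

    gray:       2-D grayscale image (numpy array).
    class_freq: 2-D array with one top-down score per grid cell (e.g. how often
                the cell was informative in training images) -- a stand-in for
                the statistical properties used in the paper.
    alpha:      fusion weight between bottom-up and top-down saliency (assumed).
    """
    rng = rng or np.random.default_rng()
    h, w = gray.shape
    rows, cols = h // grid, w // grid
    saliency = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * grid:(r + 1) * grid, c * grid:(c + 1) * grid]
            # Bottom-up cue: gradient energy of the cell (placeholder).
            gy, gx = np.gradient(cell.astype(float))
            bottom_up = np.sqrt(gx ** 2 + gy ** 2).mean()
            saliency[r, c] = alpha * bottom_up + (1 - alpha) * class_freq[r, c]
    # Turn the saliency map into a sampling distribution over grid cells.
    probs = saliency.ravel() / saliency.sum()
    picks = rng.choice(rows * cols, size=n_patches, p=probs)
    # Return the top-left corner of each sampled grid cell.
    return [((i // cols) * grid, (i % cols) * grid) for i in picks]
```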


International Conference on Computers Helping People with Special Needs | 2016

Support System for Lecture Captioning Using Keyword Detection by Automatic Speech Recognition

Naofumi Ikeda; Yoshinori Takeuchi; Tetsuya Matsumoto; Hiroaki Kudo; Noboru Ohnishi

We propose a support system for lecture captioning. The system detects the keywords of a lecture and presents them to captionists. The captionists can then understand what the instructor said even when they cannot catch the keywords, and can input keywords rapidly by pressing the corresponding function key. The system detects the keywords by automatic speech recognition (ASR). To improve the keyword detection rate, we adapt the language model of the ASR system using web documents; we collected 2,700 web documents containing 1.2 million words and 5,800 sentences. In an experiment on detecting keywords in a real lecture, the system achieved a higher F-measure (0.957) than a base language model (0.871).
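
A minimal sketch of the keyword-detection and evaluation step is given below, assuming the ASR output is already available as a word list; exact-match keyword spotting and this particular F-measure computation are simplifications for illustration, not the paper's detector.

```python
def detect_keywords(asr_words, keywords):
    """Return the keywords found in an ASR word sequence (simple exact match).

    asr_words: list of recognized words (output of an ASR system).
    keywords:  set of lecture keywords to present to captionists.
    Exact matching is a simplification of the detector described in the paper.
    """
    return [w for w in asr_words if w in keywords]

def f_measure(detected, reference):
    """F-measure of detected keywords against a reference keyword list."""
    detected, reference = set(detected), set(reference)
    tp = len(detected & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(detected)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)
```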


Conference on Computers and Accessibility | 2013

A system helping blind people to get character information in their surrounding environment

Noboru Ohnishi; Tetsuya Matsumoto; Hiroaki Kudo; Yoshinori Takeuchi

We propose a system that helps blind people obtain character information in their surrounding environment, such as merchandise information (name, price, and best-before/use-by date) and restaurant menus (item names and prices). The system consists of a computer, a wireless camera/scanner, and an earphone. It processes images captured or scanned by the user and extracts character regions in the image using a Support Vector Machine (SVM). Applying Optical Character Recognition (OCR) to the extracted regions, the system outputs the character information as synthesized speech.
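
The sketch below outlines one possible version of the pipeline: a pre-trained SVM flags text windows, the bounding box of those windows is passed to OCR, and the result is spoken. The window size, raw-pixel features, and the use of pytesseract and pyttsx3 are assumptions made for illustration, not the components used in the paper.

```python
def read_text_in_image(image, svm, window=32, stride=16):
    """Sketch of the capture -> text-region detection -> OCR -> speech pipeline.

    image: grayscale numpy array captured by the wearable camera/scanner.
    svm:   a pre-trained classifier (e.g. sklearn.svm.SVC) labelling a window's
           feature vector as text (1) or non-text (0).
    """
    import pytesseract   # OCR binding (assumed available)
    import pyttsx3       # offline text-to-speech (assumed available)
    from PIL import Image

    h, w = image.shape
    hits = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window].astype(float) / 255.0
            if svm.predict(patch.ravel().reshape(1, -1))[0] == 1:
                hits.append((y, x))
    if not hits:
        return ""
    # OCR the bounding box of all windows classified as text.
    ys, xs = zip(*hits)
    region = image[min(ys):max(ys) + window, min(xs):max(xs) + window]
    text = pytesseract.image_to_string(Image.fromarray(region)).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)          # present the character information as speech
        engine.runAndWait()
    return text
```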


International Conference on Computers Helping People with Special Needs | 2012

A system for matching mathematical formulas spoken during a lecture with those displayed on the screen for use in remote transcription

Yoshinori Takeuchi; Hironori Kawaguchi; Noboru Ohnishi; Daisuke Wakatsuki; Hiroki Minagawa

A system is described for extracting mathematical formulas presented orally during a lecture and matching them with those simultaneously displayed on the lecture room screen. Each mathematical formula spoken by the lecturer and displayed on the screen is extracted and shown to the transcriber. Investigation showed that, in a lecture in which many mathematical formulas were presented, about 80% of them were both spoken and pointed to on the screen, meaning that the system can help a transcriber correctly transcribe up to 80% of the formulas presented. A speech recognition system is used to extract the formulas from the lecturer's speech, and a system that analyzes the trajectory of the tip of the stick pointer is used to extract the formulas from the projected images. This information is combined and used to match the pointed-to formulas with the spoken ones. In testing using actual lectures, the system extracted and matched 71.4% of the mathematical formulas that were both spoken and displayed and presented them for transcription with a precision of 89.4%.
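
A minimal sketch of the matching step is shown below, assuming the spoken formulas and the pointed-to screen regions each come with time intervals; the nearest-interval rule and the gap threshold are assumptions, not the combination method described in the paper.

```python
def match_formulas(spoken, pointed, max_gap=3.0):
    """Match spoken formulas to pointed-to formulas by temporal proximity.

    spoken:  list of (start_s, end_s, formula_text) from speech recognition.
    pointed: list of (start_s, end_s, screen_region_id) from pointer-trajectory
             analysis of the projected slides.
    max_gap: maximum allowed gap in seconds between speaking and pointing
             (the value and the nearest-interval rule are assumptions).
    Returns a list of (formula_text, screen_region_id) candidate pairs.
    """
    def gap(a, b):
        # 0 if the intervals overlap, otherwise the distance between them.
        return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

    matches = []
    for s in spoken:
        best = min(pointed, key=lambda p: gap(s, p), default=None)
        if best is not None and gap(s, best) <= max_gap:
            matches.append((s[2], best[2]))
    return matches
```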


International Conference on Computers Helping People with Special Needs | 2018

Communication Support System of Smart Glasses for the Hearing Impaired

Daiki Watanabe; Yoshinori Takeuchi; Tetsuya Matsumoto; Hiroaki Kudo; Noboru Ohnishi

In this research, we propose a novel system that displays captions of conversation content on smart glasses. The aim is communication that is closer to a conversation between hearing people than conventional methods such as sign language or handwriting. The system translates spoken words into text and displays them on the screen of the smart glasses. It is equipped with four microphones, localizes the sound source, and estimates its angular direction. Using the signals from the four microphones, the system can also enhance voices; this voice enhancement is needed to improve the automatic speech recognition rate in noisy environments. Experimental results showed that the system can estimate the angular direction of a voice and recognize more than 90% of spoken words. A subject experiment showed that the proposed system achieved a similarity score close to that of an existing smartphone application.
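
For the direction-estimation step, the sketch below uses GCC-PHAT on one microphone pair, a standard technique for this purpose; the paper does not state that it uses GCC-PHAT, so the algorithm, microphone spacing, and sampling rate here are assumptions.

```python
import numpy as np

def gcc_phat_direction(sig_a, sig_b, fs=16000, mic_dist=0.05, c=343.0):
    """Estimate the direction of arrival of a voice from one microphone pair.

    sig_a, sig_b: time-aligned signals from two of the four microphones.
    mic_dist:     spacing between the two microphones in metres (assumed).
    Returns the angle of arrival in degrees relative to broadside.
    """
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    corr = np.fft.irfft(cross, n=n)
    max_lag = max(1, int(fs * mic_dist / c))  # physically possible lags only
    lags = np.concatenate((corr[-max_lag:], corr[:max_lag + 1]))
    tau = (np.argmax(lags) - max_lag) / fs    # time difference of arrival
    sin_theta = np.clip(tau * c / mic_dist, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```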


International Conference on Computers Helping People with Special Needs | 2018

Detection of Input-Difficult Words by Automatic Speech Recognition for PC Captioning

Yoshinori Takeuchi; Daiki Kojima; Shoya Sano; Shinji Kanamori

Hearing-impaired students often need complementary technologies to help them understand college lectures, and several universities already use PC captioning. However, captionists sometimes input unfamiliar technical terms and proper nouns inaccurately. We call these words "input-difficult words" (IDWs). In this research, we evaluate the performance of detecting IDWs using real lectures from our university. We propose a method to automatically extract IDWs from lecture materials and conducted an experiment to measure IDW detection performance while changing the interpolation weight of the language model. In this experiment, which used four real lectures, a high F-measure of 0.889 was achieved.
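
A minimal sketch of extracting IDW candidates from lecture materials is given below; the regex tokenization (suitable only for English text) and the frequency threshold are assumptions made for illustration, not the paper's extraction method.

```python
import re
from collections import Counter

def extract_idw_candidates(lecture_text, common_vocab, min_count=2):
    """Extract candidate input-difficult words (IDWs) from lecture material.

    lecture_text: raw text of the lecture slides or handouts.
    common_vocab: set of everyday words a captionist can be assumed to type
                  easily (e.g. a general-domain frequency list).
    Terms that appear repeatedly in the material but are absent from the
    common vocabulary are treated as IDW candidates.
    """
    tokens = re.findall(r"[A-Za-z][A-Za-z\-]+", lecture_text.lower())
    counts = Counter(tokens)
    return sorted(w for w, n in counts.items()
                  if n >= min_count and w not in common_vocab)
```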


International Conference on Computers Helping People with Special Needs | 2016

System Supporting Independent Walking of the Visually Impaired

Mitsuki Nishikiri; Takehiro Sakai; Hiroaki Kudo; Tetsuya Matsumoto; Yoshinori Takeuchi; Noboru Ohnishi

This paper proposes an integrated navigation system that supports independent walking by the visually impaired. It provides route guidance and adjustment, zebra-crossing detection and guidance, pedestrian traffic signal detection and discrimination, and localization of the entrance doors of destination buildings. The system was implemented on an Android smartphone. In experiments, its detection rate was about 72% for zebra crossings and about 80% for traffic signals.
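
As an illustration of one subsystem, the sketch below classifies a cropped pedestrian-signal image as red or green using OpenCV HSV thresholding; the thresholds and this approach are assumptions for illustration, not the Android implementation described in the paper.

```python
import cv2

def pedestrian_signal_state(bgr_image, min_pixels=150):
    """Classify a cropped pedestrian-signal image as 'red', 'green' or 'unknown'.

    bgr_image: BGR image (OpenCV convention) containing the signal lamp.
    The HSV thresholds and pixel-count criterion are illustrative assumptions.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so it needs two ranges.
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))
    red_px, green_px = cv2.countNonZero(red), cv2.countNonZero(green)
    if max(red_px, green_px) < min_pixels:
        return "unknown"
    return "red" if red_px > green_px else "green"
```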


International Symposium on Neural Networks | 2014

Measurement of confusion color pairs for dichromats in order to use applications supporting color vision deficiency

Hiroki Takagi; Hiroaki Kudo; Tetsuya Matsumoto; Noboru Ohnishi; Yoshinori Takeuchi

Recently, various mobile applications supporting users with color vision deficiency have been released. However, such applications may not perform as designed: color calibration and personalization are needed, because the color reproduction of each device differs and users also differ individually in their color vision characteristics. As a result, confusion color pairs shift slightly, and these shifts can in turn serve as a clue for color calibration or personalization. Here, we propose methods for determining confusion color pairs through a simple operation. We estimate the pairs with a bisection-like procedure: the user is presented with three visual stimuli (a reference and two targets) and selects the target whose color is perceived as more similar to the reference color. We focus on how the stimulus colors are selected and propose two methods, defined by coordinates on the u'v' chromaticity diagram. One method uses colors on a circumference centered at the white point; the other uses colors on lines parallel to the line passing through the primary colors (green and blue). The results show performance in determining color pairs comparable to previous studies; the line-based method performs better than the circumference-based method and also reduces the measurement time.
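
The sketch below shows a bisection-style narrowing of the kind described: at each step the user compares two targets against the reference and the search interval shrinks toward the confusion point. The parameterization of the stimulus line and the stopping tolerance are assumptions of this sketch, not the paper's procedure.

```python
def measure_confusion_pair(reference, color_at, ask_user, tol=0.002,
                           lo=0.0, hi=1.0):
    """Bisection-style search for the stimulus a dichromatic user confuses
    with the reference color.

    reference: reference color (e.g. a u'v' chromaticity pair).
    color_at:  function t -> color, parameterizing candidate stimuli along a
               line or circle in the chromaticity diagram (t in [lo, hi]).
    ask_user:  function (reference, target_a, target_b) -> 0 or 1, the index
               of the target perceived as more similar to the reference.
    """
    while hi - lo > tol:
        third = (hi - lo) / 3.0
        a, b = lo + third, hi - third          # two probe points per trial
        if ask_user(reference, color_at(a), color_at(b)) == 0:
            hi = b                             # keep the interval around a
        else:
            lo = a                             # keep the interval around b
    return color_at((lo + hi) / 2.0)
```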


International Conference on Neural Information Processing | 2014

Interactive Color Correction of Display by Dichromatic User

Hiroki Takagi; Hiroaki Kudo; Tetsuya Matsumoto; Yoshinori Takeuchi; Noboru Ohnishi

Applications supporting dichromats based on confusion loci have been proposed. We propose an interactive method that corrects display color by measuring confusion color pairs so that such applications can be used. The method measures 11 confusion color pairs on a display whose color gamut is unknown. It then estimates the most similar pattern from a confusion loci database, composed by scaling one of the R, G, and B channels up or down, and corrects the display color to the sRGB gamut. We show a tendency in the confusion loci patterns under scale changes and report measurement results for a dichromat. The method improved the color difference in six of the eight color settings.
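
A minimal sketch of the pattern-matching step is given below: the 11 measured pairs are compared against stored confusion-loci patterns and the closest one is returned. The sum-of-squared-differences distance and the database layout are assumptions, not the paper's matching procedure.

```python
import numpy as np

def nearest_loci_pattern(measured_pairs, database):
    """Find the stored confusion-loci pattern closest to the measured pairs.

    measured_pairs: array of shape (11, 2, 2) -- 11 pairs of (u', v')
                    chromaticities measured on the uncalibrated display.
    database:       dict mapping a label such as ('R', 0.9) (scaled channel
                    and scale factor) to a reference array of the same shape.
    Returns the best-matching label, i.e. which channel appears scaled.
    """
    measured = np.asarray(measured_pairs, dtype=float)
    best_label, best_dist = None, np.inf
    for label, pattern in database.items():
        dist = np.sum((measured - np.asarray(pattern, dtype=float)) ** 2)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```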

Collaboration


Dive into Yoshinori Takeuchi's collaborations.

Top Co-Authors

Ahmed Mansouri (Nagoya Institute of Technology)
