Publication


Featured research published by Masayoshi Tabuse.


Robot and Human Interactive Communication | 2009

A facial expression recognition for a speaker of a phoneme of vowel using thermal image processing and a speech recognition system

Yasunari Koda; Yasunari Yoshitomi; Mari Nakano; Masayoshi Tabuse

We have proposed a method for facial expression recognition of a speaker using thermal image processing and a speech recognition system. In this study, using the speech recognition system, we improved our system to save thermal images at three timing positions: just before speaking, and while speaking the phonemes of the first and last vowels. With this method, the facial expressions of four subjects were discriminable with an average recognition accuracy of 81% when they exhibited one of the intentional facial expressions of “angry”, “happy”, “neutral”, “sad”, and “surprise”.
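
The timing-selection step lends itself to a short sketch. The following assumes vowel timestamps have already been obtained from a speech recognition system and that the thermal camera output is available as a recorded video; the function interface and the 0.1 s pre-speech offset are illustrative assumptions, not taken from the paper.

```python
import cv2  # OpenCV, used here to read a pre-recorded thermal video

def select_thermal_frames(video_path, t_start, t_first_vowel, t_last_vowel,
                          pre_speech_offset=0.1):
    """Grab the three frames used for recognition: just before speaking,
    at the first vowel, and at the last vowel (times in seconds).

    The 0.1 s offset and this interface are illustrative assumptions."""
    cap = cv2.VideoCapture(video_path)
    frames = {}
    for label, t in [("before", t_start - pre_speech_offset),
                     ("first_vowel", t_first_vowel),
                     ("last_vowel", t_last_vowel)]:
        cap.set(cv2.CAP_PROP_POS_MSEC, max(t, 0.0) * 1000.0)  # seek to timestamp
        ok, frame = cap.read()
        if ok:
            frames[label] = frame
    cap.release()
    return frames
```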


Modern Physics Letters A | 1987

HIGHER DIMENSIONAL COSMOLOGY WITH STRING VACUUM ENERGY

Haruhiko Nishimura; Masayoshi Tabuse

We consider higher-dimensional cosmology based on closed bosonic string theory with the one-loop vacuum energy. We conclude that the winding effect of strings around tori can prevent the extra space from expanding, even though the curvature of the torus is zero.


Artificial Life and Robotics | 2011

Facial expression recognition of a speaker using vowel judgment and thermal image processing

Yasunari Yoshitomi; Taro Asada; Kyouhei Shimada; Masayoshi Tabuse

We have previously developed a method for recognizing the facial expression of a speaker. For facial expression recognition, we previously selected three images: (i) just before speaking, (ii) while speaking the first vowel, and (iii) while speaking the last vowel in an utterance. Using the speech recognition system Julius, thermal static images are saved at the timing positions of just before speaking and while speaking the phonemes of the first and last vowels. To implement our method, we recorded three subjects who spoke 25 Japanese first names, which together provided all combinations of first and last vowels. These recordings were used to prepare first the training data and then the test data. Julius sometimes misrecognizes the first and/or last vowel; for example, /a/ as the first vowel is sometimes misrecognized as /i/. In the training data we corrected these misrecognitions, but no such correction can be carried out on the test data. In the implementation of our method, the facial expressions of the three subjects were distinguished with a mean accuracy of 79.8% when they exhibited one of the intentional facial expressions of “angry,” “happy,” “neutral,” “sad,” and “surprised.” The mean accuracy of vowel recognition by Julius was 84.1%.
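
For concreteness, here is a small sketch of how a mean recognition accuracy such as the 79.8% figure can be tallied from labeled trials; the expression labels come from the abstract, while the per-class averaging is our assumption about how the mean is taken.

```python
from collections import Counter

EXPRESSIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def mean_accuracy(pairs):
    """pairs: iterable of (true_label, predicted_label) trials.
    Returns the mean of per-class recognition accuracies; averaging
    per class rather than per trial is our assumption."""
    correct, total = Counter(), Counter()
    for true, pred in pairs:
        total[true] += 1
        correct[true] += (true == pred)
    per_class = [correct[e] / total[e] for e in EXPRESSIONS if total[e]]
    return sum(per_class) / len(per_class)
```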


Artificial Life and Robotics | 2003

Khepera robots applied to highway autonomous mobiles

Tatsuro Shinchi; Masayoshi Tabuse; Tetsuro Kitazoe; A. Todaka

This article presents simulation models of autonomous Khepera robots that are assumed to be running on a highway. Each robot acts by following the fish-school algorithm: although a school of fish has no special individual to lead it, autonomous movement emerges from interactions among neighboring bodies. Our goal is multiple robots that behave safely, with no accidents, solely through interactions with their surroundings. When Khepera robots run freely while sensing neighboring robots or the guard rails along the road by means of infrared sensors, the efficiency of their running, such as the distance covered and the number of accidents, is measured with an evaluation function. Genetic algorithms (GA) with this evaluation function are applied both to the optimization of the discernible region and to the development of the driving type. As a result of optimizing the behavior models of a robot, the robots could run smoothly, avoiding collisions with other robots and with guard rails, and yet run as fast as possible. The present study of autonomous multi-robot systems approaches the realization of autonomous control of vehicles running on a highway.
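
The GA layer described above can be sketched compactly. The following is a minimal illustration, assuming a simulator function simulate(params) that returns the distance covered and the number of accidents for a candidate parameter vector (here, a normalized discernible-region radius and half-angle); the fitness weighting, operators, and all numeric settings are ours, not the paper's.

```python
import random

def fitness(params, simulate):
    """simulate(params) -> (distance_covered, num_accidents); both the
    simulator and the 100-per-accident penalty are assumptions."""
    distance, accidents = simulate(params)
    return distance - 100.0 * accidents

def evolve(simulate, pop_size=30, generations=50, mutation_sigma=0.1):
    # Each individual: [sensing radius, sensing half-angle], both normalized to [0, 1].
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, simulate), reverse=True)
        elite = pop[: pop_size // 2]                      # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            child = [min(1.0, max(0.0, x + random.gauss(0, mutation_sigma)))
                     for x in child]                      # Gaussian mutation, clipped
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, simulate))
```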


Journal of Information Security | 2011

An Authentication Method for Digital Audio Using a Discrete Wavelet Transform

Yasunari Yoshitomi; Taro Asada; Yohei Kinugawa; Masayoshi Tabuse

Recently, several digital watermarking techniques have been proposed for hiding data in the frequency domain of audio files in order to protect their copyrights. In general, there is a tradeoff between the quality of watermarked audio and the tolerance of watermarks to signal processing such as compression. In previous research, we improved both simultaneously by formulating a multipurpose optimization problem for deciding the positions of watermarks in the frequency domain of audio data and obtaining a near-optimum solution to the problem using a wavelet transform and a genetic algorithm. However, obtaining the near-optimum solution was very time-consuming. To overcome this issue fundamentally, we have developed an authentication method for digital audio using a discrete wavelet transform. In contrast to digital watermarking, the proposed method inserts no additional information into the original audio; instead, the audio is authenticated using features extracted by the wavelet transform and characteristic coding. Accordingly, one can always use the copyright-protected original audio. The experimental results show that authentication with the method is highly tolerant to all tested types of MP3, AAC, and WMA compression. In addition, the processing time of the method is acceptable for everyday use.
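
The feature-extraction side of such a scheme can be sketched with a discrete wavelet transform. The wavelet (Daubechies-4), the decomposition depth, and the sign-based code below are our assumptions, since the abstract does not specify the characteristic coding; pywt is the PyWavelets library.

```python
import numpy as np
import pywt  # PyWavelets

def audio_signature(samples, wavelet="db4", level=5):
    """Decompose the audio and derive a compact binary code from the
    signs of the coarsest approximation coefficients (assumed coding)."""
    coeffs = pywt.wavedec(np.asarray(samples, dtype=float), wavelet, level=level)
    approx = coeffs[0]                     # low-frequency band survives compression best
    return (approx >= 0).astype(np.uint8)  # sign pattern as the signature

def authenticate(original, candidate, max_mismatch=0.05):
    """Compare sign patterns; a small mismatch budget tolerates lossy
    compression. The 5% threshold is an illustrative assumption."""
    a, b = audio_signature(original), audio_signature(candidate)
    n = min(len(a), len(b))
    return np.mean(a[:n] != b[:n]) <= max_mismatch
```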


Artificial Life and Robotics | 2011

A system for facial expression recognition of a speaker using front-view face judgment, vowel judgment, and thermal image processing

Tomoko Fujimura; Yasunari Yoshitomi; Taro Asada; Masayoshi Tabuse

For facial expression recognition, we selected three images: (i) just before speaking, (ii) while speaking the first vowel, and (iii) while speaking the last vowel in an utterance. In this study, as a pre-processing module, we added a judgment function to distinguish a front-view face for facial expression recognition. A frame of the front-view face in a dynamic image is selected by estimating the face direction. The judgment function measures four feature parameters using thermal image processing and selects the thermal images in which all the feature parameters fall within limited ranges that were decided on the basis of training thermal images of front-view faces. As an initial investigation, we adopted the utterance of the Japanese name “Taro,” which is semantically neutral. The mean judgment accuracy of the front-view face was 99.5% for six subjects who changed their face direction freely. Using the proposed method, the facial expressions of six subjects were distinguishable with 84.0% accuracy when they exhibited one of the intentional facial expressions of “angry,” “happy,” “neutral,” “sad,” and “surprised.” We expect the proposed method to be applicable for recognizing facial expressions in daily conversation.
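
The range-based judgment reduces to interval checks on the four feature parameters. The sketch below assumes the parameters have already been measured from each thermal frame (the extraction itself is not shown) and learns the admissible ranges from training frames of front-view faces; the optional margin is our addition.

```python
import numpy as np

def learn_ranges(training_features, margin=0.0):
    """training_features: (n_frames, 4) array of the four parameters
    measured on front-view training frames. Returns per-parameter
    (low, high) bounds; the margin widening is an assumption."""
    f = np.asarray(training_features, dtype=float)
    return f.min(axis=0) - margin, f.max(axis=0) + margin

def is_front_view(features, lo, hi):
    """A frame is judged front-view iff all four parameters fall inside
    the learned ranges, mirroring the judgment described in the abstract."""
    f = np.asarray(features, dtype=float)
    return bool(np.all((lo <= f) & (f <= hi)))
```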


Archive | 2011

Vowel Judgment for Facial Expression Recognition of a Speaker

Yasunari Yoshitomi; Taro Asada; Masayoshi Tabuse

To better integrate robots into society, a robot should be able to interact with humans in a friendly manner. The aim of our research is to contribute to the development of a robot that can perceive human feelings and mental states. Such a robot could, for example, better take care of an elderly person, support a handicapped person in his or her life, encourage a person who looks sad, or advise an individual to stop working and take a rest when he or she looks tired. Our study concerns the first stage of the development of a robot that can visually detect human feelings or mental states.

Although mechanisms for recognizing facial expressions have received considerable attention in computer vision research (Harashima et al., 1989; Kobayashi & Hara, 1994; Mase, 1990, 1991; Matsuno et al., 1994; Yuille et al., 1989), they still fall far short of human capability, especially from the viewpoint of robustness under widely varying lighting conditions. One reason for this is that the nuances of shade, reflection, and localized darkness, resulting from inevitable changes in gray levels, influence the accuracy with which facial expressions are discerned. To develop a robust method of facial expression recognition applicable under widely varied lighting conditions, we do not use a visible ray (VR) image; instead, we use an image produced by infrared rays (IR), which shows the temperature distribution of the face (Fujimura et al., 2011; Ikezoe et al., 2004; Koda et al., 2009; Nakano et al., 2009; Sugimoto et al., 2000; Yoshitomi et al., 1996, 1997a, 1997b, 2000, 2011a, 2011b; Yoshitomi, 2010). Although a human cannot detect IR, a robot can process the information contained in the thermal images created by IR. Therefore, as a new mode of robot vision, thermal image processing is a practical method that is viable under natural conditions.

The timing for recognizing facial expressions is also important for a robot because processing can be time-consuming. We adopted an utterance as the key to expressing human feelings or mental states because humans tend to say something to express their feelings (Fujimura et al., 2011; Ikezoe et al., 2004; Koda et al., 2009; Nakano et al., 2009; Yoshitomi et al., 2000; Yoshitomi, 2010). In conversation, we utter many phonemes. We selected vowel utterances as the timings for recognizing facial expressions because the number of vowels is very limited, and the waveforms of vowels tend to have a larger amplitude and a longer utterance period than those of consonants. Accordingly, the timing range of each vowel can be decided relatively easily by a speech recognition system.


Artificial Life and Robotics | 2011

Outdoor autonomous navigation using SURF features

Masayoshi Tabuse; Toshiki Kitaoka; Dai Nakai

In this article, we propose a speeded-up robust features (SURF)-based approach for outdoor autonomous navigation. In this approach, we capture environmental images using an omni-directional camera and extract features of these images using SURF. We treat these features as landmarks to estimate a robot’s self-location and direction of motion. SURF features are invariant under scale changes and rotation, and are robust against image noise, changes in lighting conditions, and changes of viewpoint. Therefore, SURF features are appropriate for the self-location estimation and navigation of a robot. The mobile robot navigation method consists of two modes: the teaching mode and the navigation mode. In the teaching mode, we teach a navigation course. In the navigation mode, the mobile robot autonomously navigates along the taught course. In our experiment, the outdoor teaching course was about 150 m long, the average speed was 2.9 km/h, and the maximum trajectory error was 3.3 m. The processing time of SURF was several times shorter than that of the scale-invariant feature transform (SIFT). Therefore, the navigation speed of the mobile robot was similar to the walking speed of a person.
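
The matching step can be sketched with OpenCV. Note that SURF lives in the xfeatures2d module of opencv-contrib-python and is patent-encumbered, so it is absent from some builds; the Hessian threshold and ratio-test value below are common defaults, not taken from the paper.

```python
import cv2

def match_surf(taught_img, current_img, hessian=400, ratio=0.75):
    """Match SURF features between a taught view and the current view.
    The matched keypoint pairs can then drive self-location and heading
    estimation; that estimation step is omitted here."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(taught_img, None)
    kp2, des2 = surf.detectAndCompute(current_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs                      # Lowe's ratio test
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```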


International Journal of Modern Physics A | 1990

CLASSIFICATION OF SYMMETRY BREAKING PATTERNS IN HETEROTIC STRINGS ON Z3 ORBIFOLD

Akira Fujitsu; Tetsuro Kitazoe; Masayoshi Tabuse; Haruhiko Nishimura

Symmetry breaking patterns of E8×E′8 on the Z3 orbifold are completely analyzed for physically acceptable models by using a kind of Weyl transformation and modular invariance. It is shown that SU(3)×SU(2)×U(1)^n cannot be obtained by any choice of shift vector and Wilson lines, and that the minimal group including SU(3)×SU(2)×U(1) is SU(3)^2×U(1)^n, which is realized by a shift vector and Wilson lines. All models accessible to realistic low-energy theories are listed, and an example is described that yields SU(3)×SU(2)×U_Y(1) via the Higgs mechanism with an anomalous U(1) channel. The example gives massless quarks, leptons, and Higgs bosons, suppresses proton decay by the Planck mass scale, and makes the left-handed neutrino massless.


International Journal of Advanced Robotic Systems | 2013

Recognition of a Baby's Emotional Cry Towards Robotics Baby Caregiver

Shota Yamamoto; Yasunari Yoshitomi; Masayoshi Tabuse; Kou Kushida; Taro Asada

We developed a method for pattern recognition of a baby's emotions (uncomfortable, hungry, or sleepy) as expressed in the baby's cries. A 32-dimensional fast Fourier transform is performed on sound-form clips, detected by our previously reported method and used as training data. The power of the sound form judged to be a silent region is subtracted from the power of each frequency element, and the power of each frequency element after the subtraction is treated as one element of the feature vector. We perform principal component analysis (PCA) on the feature vectors of the training data. The emotion of the baby is recognized by the nearest-neighbor criterion applied to the feature vector obtained from the test data of sound-form clips, after projecting the feature vector onto the PCA space derived from the training data. Then, the emotion with the highest frequency among the recognition results for a sound-form clip is judged to be the emotion expressed by the baby's cry. We successfully applied the proposed method to pattern recognition of a baby's emotions. The present investigation concerns the first stage of the development of a robotic baby caregiver that has the ability to detect a baby's emotions; in this first stage, we have developed a method for detecting them. We expect that the proposed method could be used in robots that help take care of babies.
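
The pipeline as described maps onto a short sketch: 32-point FFT power spectra with the silent-region power subtracted per frequency bin, PCA projection, nearest-neighbor labeling per window, and a majority vote over the clip. The window handling, the number of principal components, and the library choices are our assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA

def features(frames, silence_power):
    """frames: (n, 32) windows of the cry waveform; silence_power: per-bin
    power of a window judged silent. Power spectrum of a 32-point FFT,
    with the silent-region power subtracted (floored at zero)."""
    spec = np.abs(np.fft.rfft(frames, n=32, axis=1)) ** 2
    return np.maximum(spec - silence_power, 0.0)

def classify_clip(train_X, train_y, test_frames, silence_power, n_components=8):
    """train_X: feature vectors already computed via features();
    train_y: their emotion labels. Returns the clip's majority-vote label."""
    pca = PCA(n_components=n_components).fit(train_X)  # PCA space from training data
    train_p = pca.transform(train_X)
    votes = []
    for f in pca.transform(features(test_frames, silence_power)):
        nearest = np.argmin(np.linalg.norm(train_p - f, axis=1))  # 1-NN criterion
        votes.append(train_y[nearest])
    return Counter(votes).most_common(1)[0][0]         # majority vote over the clip
```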

Collaboration


Dive into Masayoshi Tabuse's collaborations.

Top Co-Authors

Yasunari Yoshitomi (Kyoto Prefectural University)
Taro Asada (Kyoto Prefectural University)
Ryota Kato (Kyoto Prefectural University)
Jin Narumoto (Kyoto Prefectural University of Medicine)
Yuu Nakanishi (Kyoto Prefectural University)
Noriaki Kuwahara (Kyoto Institute of Technology)