Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michiaki Yasumura is active.

Publication


Featured research published by Michiaki Yasumura.


Ubiquitous Computing | 2004

ActiveBelt: Belt-Type Wearable Tactile Display for Directional Navigation

Koji Tsukada; Michiaki Yasumura

In this paper we propose a novel wearable interface called “ActiveBelt” that enables users to obtain directional information through the tactile sense. Since information conveyed through the tactile sense is relatively unobtrusive, it is suited to daily use in mobile environments. However, few existing systems transmit complex information via the tactile sense; most send only simple signals, such as the vibration of a cellular phone. ActiveBelt is a belt-type wearable tactile display that can transmit directional information. We have developed prototype systems and applications, evaluated system performance and usability, and demonstrated the possibility of practical use.
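
The core mapping in a belt-type tactile display of this kind is from a target bearing to the actuator nearest that direction on the wearer's body. A minimal sketch, assuming a hypothetical belt of eight evenly spaced vibration motors; the actuator count and function name are illustrative, not ActiveBelt's actual implementation:

```python
# Illustrative sketch: map a target bearing to the nearest of N belt
# actuators. Actuator count and naming are hypothetical, not ActiveBelt's.

NUM_ACTUATORS = 8  # assumed: motors evenly spaced around the belt

def actuator_for_bearing(target_bearing_deg: float, body_heading_deg: float) -> int:
    """Index of the actuator closest to the target direction, relative
    to the direction the wearer is facing."""
    relative = (target_bearing_deg - body_heading_deg) % 360.0
    sector = 360.0 / NUM_ACTUATORS
    return int((relative + sector / 2) // sector) % NUM_ACTUATORS

# Example: a target due east (90°) while the wearer faces north (0°)
# activates the motor on the wearer's right side.
print(actuator_for_bearing(90.0, 0.0))  # -> 2
```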


Speech Communication | 2003

A corpus-based speech synthesis system with emotion

Akemi Iida; Nick Campbell; Fumito Higuchi; Michiaki Yasumura

We propose a new approach to synthesizing emotional speech with a corpus-based concatenative speech synthesis system (ATR CHATR) using speech corpora of emotional speech. In this study, neither emotion-dependent prosody prediction nor signal processing per se is performed for emotional speech. Instead, a large speech corpus is created per emotion, and speech with the appropriate emotion is synthesized by simply switching between the emotional corpora. This is made possible by the normalization procedure incorporated in CHATR, which transforms its standard predicted prosody range according to the source database in use. We evaluate our approach by creating three emotional speech corpora (anger, joy, and sadness) from recordings of a male and a female speaker of Japanese. The acoustic characteristics of each corpus are distinct and the emotions identifiable. The acoustic characteristics of each emotional utterance synthesized by our method show clear correlations to those of the corresponding corpus. Perceptual experiments using synthesized speech confirmed that our method can synthesize recognizably emotional speech. We further evaluated the method's intelligibility and the overall impression it gives to listeners. The results show that the proposed method synthesizes speech with high intelligibility and gives a favorable impression. With these encouraging results, we have developed a workable text-to-speech system with emotion to support the immediate needs of nonspeaking individuals. This paper describes the proposed method, the design and acoustic characteristics of the corpora, and the results of the perceptual evaluations.
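
The key idea above, switching between per-emotion corpora instead of manipulating prosody or the signal, can be sketched as simple database selection. A minimal sketch; the class and method names are hypothetical stand-ins, not CHATR's actual interfaces:

```python
# Illustrative sketch of corpus switching for emotional synthesis.
# Class and method names are hypothetical stand-ins for CHATR's interfaces.

class CorpusSynthesizer:
    def __init__(self, corpora: dict):
        # One unit-selection database per emotion, e.g. built from
        # recordings of the same speaker in anger, joy, and sadness.
        self.corpora = corpora

    def synthesize(self, text: str, emotion: str) -> str:
        if emotion not in self.corpora:
            raise ValueError(f"no corpus recorded for emotion: {emotion}")
        # Per the abstract, prosody prediction is normalized to the range
        # of the selected source database, so no emotion-specific signal
        # processing is applied here.
        return f"<waveform: {text!r} concatenated from {self.corpora[emotion]} units>"

tts = CorpusSynthesizer({"anger": "anger_db", "joy": "joy_db", "sadness": "sadness_db"})
print(tts.synthesize("Hello.", "joy"))
```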


Human Factors in Computing Systems | 2004

Tapping vs. circling selections on pen-based devices: evidence for different performance-shaping factors

Sachi Mizobuchi; Michiaki Yasumura

Tapping-based selection methods for handheld devices may need to be supplemented with other approaches as increasingly complex tasks are carried out using those devices. Circling selection methods (such as the Lasso) allow users to select objects on a touch screen by circling with a pen. An experimental comparison of selection time and accuracy between a circling method and a traditional tapping style of selection was carried out. The experiment used a two-dimensional grid (varying in terms of the sizes and distances of the targets). Analysis of variance showed that tapping selection time differed significantly depending on the size and spacing of the targets. In contrast, circling selection times differed significantly for different levels of target cohesiveness and shape complexity. The results are discussed in terms of implications for the design of new pen-based selection methods for handheld devices, and also in terms of evaluation methodology for input selection methods.
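
The analysis described, selection times compared across factor levels with analysis of variance, can be reproduced on any logged trial data. A minimal sketch with simulated selection times; the numbers are invented for illustration, and scipy's one-way ANOVA stands in for the study's full factorial analysis:

```python
# Illustrative one-way ANOVA of tapping selection time by target size.
# Data are simulated; a real analysis would use logged trial times.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical tapping times (ms) for small / medium / large targets:
small = rng.normal(900, 120, 30)
medium = rng.normal(780, 120, 30)
large = rng.normal(650, 120, 30)

f_stat, p_value = f_oneway(small, medium, large)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # size effect on tapping time
```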


International Conference on Human-Computer Interaction | 2009

Emotions and Messages in Simple Robot Gestures

Jamy Li; Mark H. Chignell; Sachi Mizobuchi; Michiaki Yasumura

Understanding how people interpret robot gestures will aid the design of effective social robots. We examine the generation and interpretation of gestures in a simple social robot capable of head and arm movement in two studies. In the first study, four participants created gestures with corresponding messages and emotions based on 12 different scenarios provided to them. The resulting gestures were then shown in the second study to 12 participants, who judged which emotions and messages were being conveyed. Knowledge (present or absent) of the motivating scenario (context) for each gesture was manipulated as an experimental factor. Context was found to assist message understanding while providing only modest assistance to emotion recognition. While better than chance, both emotion (22%) and message understanding (40%) accuracies were relatively low. The results obtained are discussed in terms of implied guidelines for designing gestures for social robots.
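
Whether the reported 22% and 40% accuracies are better than chance can be checked with a binomial test. A minimal sketch; the trial count and uniform chance level are assumptions for illustration, since the study's exact category counts are not given here:

```python
# Illustrative check that recognition accuracy exceeds chance.
# Trial count and chance level are assumed, not taken from the study.
from scipy.stats import binomtest

n_trials = 144   # assumed: 12 judges x 12 gestures
chance = 1 / 12  # assumed: uniform guessing over 12 categories

for label, accuracy in [("emotion", 0.22), ("message", 0.40)]:
    successes = round(accuracy * n_trials)
    result = binomtest(successes, n_trials, chance, alternative="greater")
    print(f"{label}: {successes}/{n_trials} correct, p = {result.pvalue:.2e}")
```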


Human Factors in Computing Systems | 2007

WillCam: a digital camera visualizing users' interest

Keita Watanabe; Koji Tsukada; Michiaki Yasumura

With the increased usage of digital cameras and camera-enabled mobile phones in recent years, large numbers of photos are being taken. Many of the photos that are taken are used little, if at all. Researchers and companies have developed systems where photos can be annotated or tagged to facilitate storage and retrieval. However, people are often unwilling to spend the time and effort to carry out annotation. To solve this problem, we focus on real-time annotation, where photographs are annotated at the time they are taken. We propose a novel digital camera, “WillCam”, which enables users to capture various information, such as location, temperature, ambient noise, and the photographer's facial expression, in addition to the photo itself. WillCam also helps users express their interest visually, i.e., what object or information in the picture/scene is most important to them.
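
Real-time annotation of this kind amounts to bundling sensor readings with the image at capture time. A minimal sketch; the camera and sensor interfaces are hypothetical placeholders, not WillCam's actual hardware API:

```python
# Illustrative sketch of capture-time annotation: bundle sensor readings
# with the photo. The camera and sensor objects are hypothetical.
import time

def capture_annotated_photo(camera, sensors) -> dict:
    image = camera.capture()  # raw image bytes (placeholder interface)
    return {
        "image": image,
        "metadata": {
            "timestamp": time.time(),
            "location": sensors.gps(),                 # e.g. (lat, lon)
            "temperature_c": sensors.temperature(),
            "noise_db": sensors.ambient_noise(),
            "expression": sensors.face_expression(),   # photographer-facing camera
        },
    }
```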


Asia Pacific Computer and Human Interaction | 1998

Emotional speech as an effective interface for people with special needs

Akemi Iida; Nick Campbell; Michiaki Yasumura

The paper describes an application concept of an affective communication system for people with disabilities and elderly people, summarizes the universal nature of emotion and its vocal expression, and reports on the work of designing a corpus database of emotional speech for speech synthesis in the proposed system. Three corpora of emotional speech (joy, anger, and sadness) have been designed and tested for use with CHATR, the concatenative speech synthesis system at ATR. Each text corpus was designed to bring out a speaker's emotion. The results of perceptual experiments were significant, both for the recorded speech and for CHATR-synthesized speech. This indicates that the subjects successfully identified the emotion types of the synthesized speech from implicit phonetic information, and hence this study demonstrates the validity of using a corpus of emotional speech as a database for a concatenative speech synthesis system.


IEEE Transactions on Affective Computing | 2012

Identifying Emotion through Implicit and Explicit Measures: Cultural Differences, Cognitive Load, and Immersion

Danielle M. Lottridge; Mark H. Chignell; Michiaki Yasumura

Measures of emotion should accurately characterize the nature of an emotional experience and determine whether that experience is universal or unique to a subgroup or culture. We investigated the value of assessing emotion through skin conductance (an easy-to-interpret physiological measure) and sliders (frequently used and direct measures of perceived emotion). This paper describes findings from two experiments. The first evaluated various slider configurations and found that measured emotions successfully characterized the emotional nature of short videos. The second experiment collected the slider and skin conductance measures of emotion while one sample of Japanese participants and another sample of Canadian participants viewed longer videos. The measures were sensitive enough to identify cultural differences consistent with existing literature and were also able to identify parts of the experience where members from different cultures reacted consistently, pinpointing content that provoked a universal experience. We offer a toolkit of data interpretation techniques to gain more insight into the implicit and explicit emotion data: analyses for expressiveness and agreement that can infer states such as engagement and fatigue. We summarize the aspects of our measurement approach and toolkit in a model: the ability to distinguish the emotional nature of stimuli, individuals, and affective interaction.
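
The expressiveness and agreement analyses mentioned in the toolkit can be illustrated over time-aligned slider traces: expressiveness as how much each participant moved the slider, agreement as how tightly participants cluster at each time point. A minimal sketch with simulated traces; these metric definitions are plausible stand-ins, not the paper's exact formulations:

```python
# Illustrative expressiveness / agreement metrics over slider traces.
# Data are simulated; the metric definitions are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
# 10 participants x 300 time points of a 0..1 emotion slider:
shared = np.clip(np.cumsum(rng.normal(0, 0.02, 300)), -1, 1)
traces = 0.5 + 0.4 * shared + rng.normal(0, 0.05, (10, 300))

expressiveness = traces.std(axis=1)            # per participant: slider movement
agreement = 1.0 / (1e-9 + traces.std(axis=0))  # per time point: inverse spread

print("most expressive participant:", int(expressiveness.argmax()))
print("highest-agreement time point:", int(agreement.argmax()))
```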


Ubiquitous Computing | 2010

CastOven: a microwave oven with just-in-time video clips

Keita Watanabe; Shota Matsuda; Michiaki Yasumura; Masahiko Inami; Takeo Igarashi

In this paper, we propose a novel microwave oven called CastOven. CastOven is a microwave oven with an LCD display that enables people to enjoy videos while they wait for cooking to finish. Current media content forces us to adjust our schedules to enjoy it: media content, especially movies, takes a specific amount of time to watch, and it is not easy to squeeze that time into daily life. The system identifies idle time in daily life and delivers an appropriate amount of media content for the user to enjoy during that idle time.
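
The matching step, fitting a clip to the wait, reduces to duration selection. A minimal sketch with a hypothetical clip catalog; this is an illustration, not the actual CastOven implementation:

```python
# Illustrative sketch: pick the video clip that best fills the cook time.
# The clip catalog is hypothetical.
from typing import Optional

def pick_clip(clips: list, cook_seconds: int) -> Optional[str]:
    """Return the title of the longest clip that fits within the cook time."""
    fitting = [(title, length) for title, length in clips if length <= cook_seconds]
    return max(fitting, key=lambda c: c[1])[0] if fitting else None

catalog = [("news digest", 60), ("comedy short", 120), ("music video", 240)]
print(pick_clip(catalog, cook_seconds=150))  # -> "comedy short"
```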


International Conference on Human-Computer Interaction | 2013

suGATALOG: Fashion Coordination System That Supports Users to Choose Everyday Fashion with Clothed Pictures

Ayaka Sato; Keita Watanabe; Michiaki Yasumura; Jun Rekimoto

When deciding what to wear, we normally have to consider several things, such as color and combination of clothes, as well as situations that might change every day, including the weather, what to do, where to go, and whom to meet with. Trying on many possible combinations can be very tedious; thus, computer support would be helpful. Therefore, we propose suGATALOG, a fashion coordination system that allows users to choose and coordinate clothes from their wardrobe. Previous studies have proposed systems using computer images of clothes to allow users to inspect their clothing ensemble. Our system uses pictures of users actually wearing the clothes to give a more realistic impression. suGATALOG compares several combinations by swapping top and bottom images. In this paper, we describe the system architecture and its user interface, as well as an evaluation experiment and a long-term trial test to verify the usefulness of the system.
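
The comparison mechanism, swapping top and bottom photos, enumerates the Cartesian product of the two sets of garment pictures. A minimal sketch; the filenames are hypothetical:

```python
# Illustrative sketch: enumerate outfit candidates by pairing every
# "top" photo with every "bottom" photo. Filenames are hypothetical.
from itertools import product

tops = ["white_shirt.jpg", "blue_sweater.jpg"]
bottoms = ["jeans.jpg", "gray_skirt.jpg"]

for top, bottom in product(tops, bottoms):
    # In suGATALOG, each candidate is shown as a composite of two photos
    # of the user actually wearing the garments.
    print(f"candidate outfit: {top} + {bottom}")
```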


International Conference on Human-Computer Interaction | 2009

Time-Oriented Interface Design: Picking the Right Time and Method for Information Presentation

Keita Watanabe; Kei Sugawara; Shota Matsuda; Michiaki Yasumura

Today, people have far more access to relevant information than they can possibly consume. In this paper we describe a framework for Time-oriented Interface Design where information presentation and access is regulated according to when human activities afford opportunities for interacting with information. Information interfaces are then designed according to the time available during these opportunities, with the designs being constrained by salient aspects of the associated situations and contexts. In our view of time-oriented interface design there are four main types of situation where there may be time to view or interact with information: Spontaneous time; Waiting time; Background time; Interruption / Resumption. Information presented in these situations may be consumed without conflicting with the performance of other tasks. In the following presentation, the four types of information access situation are described. The use of time-oriented interface design is then illustrated by five prototype systems that have been developed in our laboratory. The paper will conclude with a discussion of lessons learned and an assessment of the potential for time-oriented human interface design to enhance future information interaction.
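
The framework's core decision, choosing a presentation format from the type and length of an available time slot, can be sketched as a simple dispatch over the four situation types above. The thresholds and content choices below are invented for illustration:

```python
# Illustrative dispatch from an available time slot to a presentation
# format, following the four situation types above. Thresholds invented.

def choose_presentation(situation: str, seconds_available: float) -> str:
    if situation == "interruption":
        return "resumption cue: one-line summary of the suspended task"
    if situation == "background":
        return "ambient display: glanceable, no interaction required"
    if situation == "waiting" and seconds_available >= 60:
        return "short clip or article matched to the wait duration"
    # spontaneous or very short slots: keep it minimal
    return "single headline or notification"

print(choose_presentation("waiting", 120))
```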

Collaboration


Dive into Michiaki Yasumura's collaborations.

Top Co-Authors

Koji Tsukada

Future University Hakodate
